Dave Maass's Techdirt Profile


Posted on Techdirt - 12 December 2025 @ 03:36pm

How Cops Are Using Flock Safety’s ALPR Network To Surveil Protesters And Activists

It’s no secret that 2025 has given Americans plenty to protest about. But as news cameras showed protesters filling streets of cities across the country, law enforcement officers—including U.S. Border Patrol agents—were quietly watching those same streets through different lenses: Flock Safety automated license plate readers (ALPRs) that tracked every passing car. 

Through an analysis of 10 months of nationwide searches on Flock Safety’s servers, we discovered that more than 50 federal, state, and local agencies ran hundreds of searches through Flock’s national network of surveillance data in connection with protest activity. In some cases, law enforcement specifically targeted known activist groups, demonstrating how mass surveillance technology increasingly threatens our freedom to demonstrate. 

Flock Safety provides ALPR technology to thousands of law enforcement agencies. The company installs cameras throughout their jurisdictions, and these cameras photograph every car that passes, documenting the license plate, color, make, model and other distinguishing characteristics. This data is paired with time and location, and uploaded to a massive searchable database. Flock Safety encourages agencies to share the data they collect broadly with other agencies across the country. It is common for an agency to search thousands of networks nationwide even when they don’t have reason to believe a targeted vehicle left the region. 

Via public records requests, EFF obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025. The data shows that agencies logged hundreds of searches related to the 50501 protests in February, the Hands Off protests in April, the No Kings protests in June and October, and other protests in between. 
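The core of this kind of analysis is a simple filter over the "reason" field of the audit logs. The sketch below illustrates the idea; the column names, sample rows, and term list are hypothetical and chosen for illustration, while the real datasets run to millions of rows:

```python
import csv
import io

# Hypothetical audit-log excerpt; real logs obtained via public records
# requests contain millions of rows and may use different column names.
SAMPLE_LOG = """agency,timestamp,reason
Tulsa Police Department,2025-06-14T10:02:00,protest
Beaumont Police Department,2025-06-14T11:15:00,KINGS DAY PROTEST
Example County Sheriff,2025-06-14T12:00:00,stolen vehicle
Tempe Police Department,2025-06-15T09:30:00,ATL No Kings Protest
"""

# Terms drawn from the searches described in this article.
PROTEST_TERMS = ("protest", "no kings", "no king", "hands off", "50501")

def protest_related(reason: str) -> bool:
    """Case-insensitive substring match against known protest terms."""
    lowered = reason.lower()
    return any(term in lowered for term in PROTEST_TERMS)

# Keep only rows whose logged reason matches a protest-related term.
matches = [row for row in csv.DictReader(io.StringIO(SAMPLE_LOG))
           if protest_related(row["reason"])]
for row in matches:
    print(row["agency"], "->", row["reason"])
```

Substring matching like this surfaces explicit mentions only; searches logged under unrelated or vague reasons would not be caught.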

The Tulsa Police Department in Oklahoma was one of the most consistent users of Flock Safety’s ALPR system for investigating protests, logging at least 38 such searches. This included running searches that corresponded to a protest against deportation raids in February, a protest at Tulsa City Hall in support of pro-Palestinian activist Mahmoud Khalil in March, and the No Kings protest in June. During the most recent No Kings protests in mid-October, agencies such as the Lisle Police Department in Illinois, the Oro Valley Police Department in Arizona, and the Putnam County (Tenn.) Sheriff’s Office all ran protest-related searches. 

While EFF and other civil liberties groups argue the law should require a search warrant for such searches, police are simply prompted to enter text into a “reason” field in the Flock Safety system. Usually this is only a few words, or even just one.

In these cases, that word was often just “protest.” 

Crime does sometimes occur at protests, whether that’s property damage, pickpocketing, or clashes between groups on opposite sides of a protest. Some of these searches may have been tied to an actual crime that occurred, even though in most cases officers did not articulate a criminal offense when running the search. But the truth is, the only reason an officer is able to even search for a suspect at a protest is because ALPRs collected data on every single person who attended the protest.

Search and Dissent 

2025 was an unprecedented year of street action. In June and again in October, thousands across the country mobilized under the banner of the “No Kings” movement—marches against government overreach, surveillance, and corporate power. By some estimates, the October demonstrations ranked among the largest single-day protests in U.S. history, filling the streets from Washington, D.C., to Portland, OR. 

EFF identified 19 agencies that logged dozens of searches associated with the No Kings protests in June and October 2025. In some cases the term “No Kings” was used explicitly, while in others officers logged the generic term “protest” on days that coincided with the massive demonstrations.

Law Enforcement Agencies that Ran Searches Corresponding with “No Kings” Rallies
* Anaheim Police Department, Calif.
* Arizona Department of Public Safety
* Beaumont Police Department, Texas
* Charleston Police Department, SC
* Flagler County Sheriff’s Office, Fla.
* Georgia State Patrol
* Lisle Police Department, Ill.
* Little Rock Police Department, Ark.
* Marion Police Department, Ohio
* Morristown Police Department, Tenn.
* Oro Valley Police Department, Ariz.
* Putnam County Sheriff’s Office, Tenn.
* Richmond Police Department, Va.
* Riverside County Sheriff’s Office, Calif.
* Salinas Police Department, Calif.
* San Bernardino County Sheriff’s Office, Calif.
* Spartanburg Police Department, SC
* Tempe Police Department, Ariz.
* Tulsa Police Department, Okla.
* US Border Patrol

For example: 

  • In Washington state, the Spokane County Sheriff’s Office listed “no kings” as the reason for three searches on June 15, 2025. The agency queried 95 camera networks, looking for vehicles matching the description of “work van,” “bus” or “box truck.” 
  • In Texas, the Beaumont Police Department ran six searches related to two vehicles on June 14, 2025, listing “KINGS DAY PROTEST” as the reason. The queries reached across 1,774 networks. 
  • In California, the San Bernardino County Sheriff’s Office ran a single search for a vehicle across 711 networks, logging “no king” as the reason. 
  • In Arizona, the Tempe Police Department made three searches for “ATL No Kings Protest” on June 15, 2025, searching through 425 networks. “ATL” is police code for “attempt to locate.” The agency appears not to have been looking for a particular plate, but for any red vehicle on the road during a certain time window.

But the No Kings protests weren’t the only demonstrations drawing law enforcement’s digital dragnet in 2025. 

For example:

  • In Nevada’s state capital, the Carson City Sheriff’s Office ran three searches that correspond to the February 50501 Protests against DOGE and the Trump administration. The agency searched for two vehicles across 178 networks with “protest” as the reason.
  • In Florida, the Seminole County Sheriff’s Office logged “protest” for five searches that correspond to a local May Day rally.
  • In Alabama, the Homewood Police Department logged four searches in early July 2025 for three vehicles with “PROTEST CASE” and “PROTEST INV.” in the reason field. The searches, which probed 1,308 networks, correspond to protests against the police shooting of Jabari Peoples.
  • In Texas, the Lubbock Police Department ran two searches on March 15 for a Tennessee license plate, corresponding to a rally to highlight the mental health impact of immigration policies. The searches hit 5,966 networks, with the logged reason “protest veh.”
  • In Michigan, Grand Rapids Police Department ran five searches that corresponded with the Stand Up and Fight Back Rally in February. The searches hit roughly 650 networks, with the reason logged as “Protest.”

Some agencies have adopted policies that prohibit using ALPRs for monitoring activities protected by the First Amendment. Yet many officers probed the nationwide network with terms like “protest” without articulating an actual crime under investigation.

In a few cases, police were using Flock’s ALPR network to investigate threats made against attendees or incidents where motorists opposed to the protests drove their vehicle into crowds. For example, throughout June 2025, an Arizona Department of Public Safety officer logged three searches for “no kings rock threat,” and a Wichita (Kan.) Police Department officer logged 22 searches for various license plates under the reason “Crime Stoppers Tip of causing harm during protests.”

Even when law enforcement is specifically looking for vehicles engaged in potentially criminal behavior such as threatening protesters, it cannot be ignored that mass surveillance systems work by collecting data on everyone driving to or near a protest—not just those under suspicion.

Border Patrol’s Expanding Reach 

As U.S. Border Patrol (USBP), ICE, and other federal agencies tasked with immigration enforcement have massively expanded operations into major cities, advocates for immigrants have responded through organized rallies, rapid-response confrontations, and extended presences at federal facilities. 

USBP has made extensive use of Flock Safety’s system for immigration enforcement, but also to target those who object to its tactics. In June, a few days after the No Kings Protest, USBP ran three searches for a vehicle using the descriptor “Portland Riots.” 

USBP also used the Flock Safety network to investigate a motorist who had “extended his middle finger” at Border Patrol vehicles that were transporting detainees. The motorist then allegedly drove in front of one of the vehicles and slowed down, forcing the Border Patrol vehicle to brake hard. An officer ran seven searches for his plate, citing “assault on agent” and “18 usc 111,” the federal criminal statute for assaulting, resisting or impeding a federal officer. The individual was charged in federal court in early August. 

USBP had access to the Flock system during a trial period in the first half of 2025, but the company says it has since paused the agency’s access. However, Border Patrol and other federal immigration authorities have been able to access the system’s data through local agencies that have run searches on their behalf or even lent them logins.

Targeting Animal Rights Activists

Law enforcement’s use of Flock’s ALPR network to surveil protesters isn’t limited to large-scale political demonstrations. Three agencies also used the system dozens of times to specifically target activists from Direct Action Everywhere (DxE), an animal-rights organization known for using civil disobedience tactics to expose conditions at factory farms.

Delaware State Police queried the Flock national network nine times in March 2025 related to DxE actions, logging reasons such as “DxE Protest Suspect Vehicle.” DxE advocates told EFF that these searches correspond to an investigation the organization undertook of a Mountaire Farms facility. 

Additionally, the California Highway Patrol logged dozens of searches related to a “DXE Operation” throughout the day on May 27, 2025. The organization says this corresponds with an annual convening in California that typically ends in a direct action. Participants leave the event early in the morning, then drive across the state to a predetermined but previously undisclosed protest site. Also in May, the Merced County Sheriff’s Office in California logged two searches related to “DXE activity.” 

As an organization engaged in direct activism, DxE has faced criminal prosecution for its activities, so the organization told EFF it was not surprised to learn it is under scrutiny from law enforcement, particularly considering how industrial farmers have collected and distributed their own intelligence to police.

The targeting of DxE activists reveals how ALPR surveillance extends beyond conventional and large-scale political protests to target groups engaged in activism that challenges powerful industries. For animal-rights activists, the knowledge that their vehicles are being tracked through a national surveillance network undeniably creates a chilling effect on their ability to organize and demonstrate.

Fighting Back Against ALPR 

ALPR systems are designed to capture information on every vehicle that passes within view. That means they don’t just capture data on “criminals” but on everyone, all the time—and that includes people engaged in their First Amendment right to publicly dissent. Police are sitting on massive troves of data that can reveal who attended a protest, and this data shows they are not afraid to use it. 

Our analysis only includes data where agencies explicitly mentioned protests or related terms in the “reason” field when documenting their search. It’s likely that scores more searches were conducted under less obvious pretexts. According to our analysis, approximately 20 percent of all searches we reviewed listed vague language like “investigation,” “suspect,” and “query” in the reason field. Those terms could well be cover for spying on a protest, an abortion prosecution, or an officer stalking a spouse, and no one would be the wiser, including the agencies whose data was searched. Flock has said it will now require officers to select a specific crime under investigation, but that field can and will also be used to obfuscate dubious searches. 
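A rough illustration of how such a percentage can be computed from a reason column; the sample rows are hypothetical, and the set of "vague" terms here is just the three examples cited above:

```python
import csv
import io

# Hypothetical sample; the actual dataset covers over 12 million searches.
SAMPLE_LOG = """reason
investigation
stolen vehicle
suspect
query
homicide case
investigation
"""

# The vague reason terms cited in this article.
VAGUE_TERMS = {"investigation", "suspect", "query"}

rows = list(csv.DictReader(io.StringIO(SAMPLE_LOG)))
# Count rows whose entire reason is one of the vague terms.
vague = sum(1 for r in rows if r["reason"].strip().lower() in VAGUE_TERMS)
share = vague / len(rows)
print(f"{vague} of {len(rows)} searches used vague reasons ({share:.0%})")
```

Exact-match counting like this is conservative: a reason like “investigation 24-1234” would not be counted even though it is scarcely more informative.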

For protesters, this data should serve as confirmation that ALPR surveillance has been and will be used to target activities protected by the First Amendment. Depending on your threat model, this means you should think carefully about how you arrive at protests and explore options such as biking, walking, carpooling, taking public transportation, or simply parking a little further away from the action. Our Surveillance Self-Defense project has more information on steps you can take to protect your privacy when traveling to and attending a protest.

For local officials, this should serve as another example of how systems marketed as protecting your community may actually threaten the values your communities hold most dear. The best way to protect people is to shut down these camera networks.  

Everyone should have the right to speak up against injustice without ending up in a database. 

Originally posted to the EFF’s Deeplinks blog.

Posted on Techdirt - 17 November 2025 @ 03:46pm

License Plate Surveillance Logs Reveal Racist Policing Against Romani People

More than 80 law enforcement agencies across the United States have used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety automated license plate reader (ALPR) network, according to audit logs obtained and analyzed by the Electronic Frontier Foundation. 

When police run a search through the Flock Safety network, which links thousands of ALPR systems, they are prompted to leave a reason and/or case number for the search. Between June 2024 and October 2025, cops performed hundreds of searches for license plates using terms such as “roma” and “g*psy,” in many instances without any mention of a suspected crime. Other logged reasons include “g*psy vehicle,” “g*psy group,” “possible g*psy,” “roma traveler” and “g*psy ruse,” perpetuating systemic harm by demeaning individuals based on their race or ethnicity. 

These queries were run through thousands of police departments’ systems—and it appears that none of these agencies flagged the searches as inappropriate. 

These searches are, by definition, racist. 

Word Choices and Flock Searches 

We are using the terms “Roma” and “Romani people” as umbrella terms, recognizing that they represent different but related groups. Since 2020, the U.S. federal government has officially recognized “Anti-Roma Racism” as including behaviors such as “stereotyping Roma as persons who engage in criminal behavior” and using the slur “g*psy.” According to the U.S. Department of State, this language “leads to the treatment of Roma as an alleged alien group and associates them with a series of pejorative stereotypes and distorted images that represent a specific form of racism.” 

Nevertheless, police officers have run hundreds of searches for license plates using the terms “roma” and “g*psy.” (Unlike the police ALPR queries we’ve uncovered, we substitute an asterisk for the Y to avoid repeating this racist slur). In many cases, these terms have been used on their own, with no mention of crime. In other cases, the terms have been used in contexts like “g*psy scam” and “roma burglary,” when ethnicity should have no relevance to how a crime is investigated or prosecuted. 
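Surfacing these reasons amounts to pattern-matching over the logged text. The sketch below is illustrative, not EFF's actual methodology; the sample strings are modeled on reasons quoted in this article and shown in their redacted form:

```python
import re

# Hypothetical reason strings modeled on those quoted in this article,
# written in the redacted form the article uses.
REASONS = [
    "G*PSY Scam",
    "roma burglary",
    "stolen vehicle",
    "possible g*psy vehicle",
]

# Case-insensitive match for the terms discussed; the word boundary
# keeps words like "aroma" from matching "roma".
PATTERN = re.compile(r"g\*ps[iy]|\broma\b", re.IGNORECASE)

# Three of the four sample reasons are flagged.
flagged = [r for r in REASONS if PATTERN.search(r)]
print(flagged)
```

A real audit would also need to handle misspellings, spacing variants, and terms embedded in longer case notes, which is why manual review of matches remains necessary.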

A “g*psy scam” and “roma burglary” do not exist in criminal law separate from any other type of fraud or burglary. Several agencies contacted by EFF have since acknowledged the inappropriate use and described efforts to address the issue internally. 

“The use of the term does not reflect the values or expected practices of our department,” a representative of the Palos Heights (IL) Police Department wrote to EFF after being confronted with two dozen searches involving the term “g*psy.” “We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language.”

Of course, the broader issue is that allowing “g*psy” or “Roma” as a reason for a search isn’t just offensive, it implies the criminalization of an ethnic group. In fact, the Grand Prairie Police Department in Texas searched for “g*psy” six times while using Flock’s “Convoy” feature, which allows an agency to identify vehicles traveling together—in essence targeting an entire traveling community of Roma without specifying a crime. 

At the bottom of this post is a list of agencies and the terms they used when searching the Flock system. 

Anti-Roma Racism in an Age of Surveillance 

Racism against Romani people has been a problem for centuries, with one of its most horrific manifestations during the Holocaust, when the Third Reich and its allies perpetrated genocide by murdering hundreds of thousands of Romani people and sterilizing thousands more. Despite efforts by the UN and EU to combat anti-Roma discrimination, this form of racism persists. As scholars Margareta Matache and Mary T. Bassett explain, it is perpetuated by modern American policing practices: 

In recent years, police departments have set up task forces specialised in “G*psy crimes”, appointed “G*psy crime” detectives, and organised police training courses on “G*psy criminality”. The National Association of Bunco Investigators (NABI), an organisation of law enforcement professionals focusing on “non-traditional organised crime”, has even created a database of individuals arrested or suspected of criminal activity, which clearly marked those who were Roma.

Thus, it is no surprise that a 2020 Harvard University survey of Romani Americans found that 4 out of 10 respondents reported being subjected to racial profiling by police. This demonstrates the ongoing challenges they face due to systemic racism and biased policing. 

Notably, many police agencies using surveillance technologies like ALPRs have adopted some sort of basic policy against biased policing or the use of these systems to target people based on race or ethnicity. But even when such policies are in place, an agency’s failure to enforce them allows these discriminatory practices to persist. These searches were also run through the systems of thousands of other police departments that may have their own policies and state laws that prohibit bias-based policing—yet none of those agencies appeared to have flagged the searches as inappropriate. 

The Flock search data in question here shows that surveillance technology exacerbates racism, and even well-meaning policies to address bias can quickly fall apart without proper oversight and accountability. 

Cops In Their Own Words

EFF reached out to a sample of the police departments that ran these searches. Here are five representative responses we received from police departments in Illinois, California, and Virginia. They do not inspire confidence.

1. Lake County Sheriff’s Office, IL 

A screen grab of three searches

In June 2025, the Lake County Sheriff’s Office ran three searches for a dark-colored pickup truck, using the reason “G*PSY Scam.” The searches covered 1,233 networks, representing 14,467 different ALPR devices. 

In response to EFF, a sheriff’s representative wrote via email:

“Thank you for reaching out and for bringing this to our attention.  We certainly understand your concern regarding the use of that terminology, which we do not condone or support, and we want to assure you that we are looking into the matter.

Any sort of discriminatory practice is strictly prohibited at our organization. If you have the time to take a look at our commitment to the community and our strong relationship with the community, I firmly believe you will see discrimination is not tolerated and is quite frankly repudiated by those serving in our organization. 

We appreciate you bringing this to our attention so we can look further into this and address it.”

2. Sacramento Police Department, CA

A screen grab of three searches

In May 2025, the Sacramento Police Department ran six searches using the term “g*psy.” The searches covered 468 networks, representing 12,885 different ALPR devices. 

In response to EFF, a police representative wrote:

“Thank you again for reaching out. We looked into the searches you mentioned and were able to confirm the entries. We’ve since reminded the team to be mindful about how they document investigative reasons. The entry reflected an investigative lead, not a disparaging reference. 

We appreciate the chance to clarify.”

3. Palos Heights Police Department, IL

A screen grab of three searches

In September 2024, the Palos Heights Police Department ran more than two dozen searches using terms such as “g*psy vehicle,” “g*psy scam” and “g*psy concrete vehicle.” Most searches hit roughly 1,000 networks. 

In response to EFF, a police representative said the searches were related to a single criminal investigation into a vehicle involved in a “suspicious circumstance/fraudulent contracting incident,” which the department said is “not indicative of a general search based on racial or ethnic profiling.” However, the agency acknowledged the language was inappropriate: 

“The use of the term does not reflect the values or expected practices of our department. We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language.

We appreciate your outreach on this matter and the opportunity to provide clarification.”

4. Irvine Police Department, CA

A screen grab of three searches

In February and May 2025, the Irvine Police Department ran eight searches using the term “roma” in the reason field. The searches covered 1,420 networks, representing 29,364 different ALPR devices. 

In a call with EFF, an IPD representative explained that the cases were related to a series of organized thefts. However, they acknowledged the issue, saying, “I think it’s an opportunity for our agency to look at those entries and to use a case number or use a different term.” 

5. Fairfax County Police Department, VA

A screen grab of three searches

Between December 2024 and April 2025, the Fairfax County Police Department ran more than 150 searches involving terms such as “g*psy case” and “roma crew burglaries.” Fairfax County PD continued to defend its use of this language.

In response to EFF, a police representative wrote:

“Thank you for your inquiry. When conducting searches in investigative databases, our detectives must use the exact case identifiers, terms, or names connected to a criminal investigation in order to properly retrieve information. These entries reflect terminology already tied to specific cases and investigative files from other agencies, not a bias or judgment about any group of people. The use of such identifiers does not reflect bias or discrimination and is not inconsistent with our Bias-Based Policing policy within our Human Relations General Order.”

A National Trend

Roma individuals and families are not the only ones being systematically and discriminatorily targeted by ALPR surveillance technologies. For example, Flock audit logs show agencies ran 400 more searches using terms targeting Traveller communities more generally, with a specific focus on Irish Travellers, often without any mention of a crime. 

Across the country, these tools are enabling and amplifying racial profiling by embedding longstanding policing biases into surveillance technologies. For example, data from Oak Park, IL, shows that 84% of drivers stopped in Flock-related traffic incidents were Black—despite Black people making up only 19% of the local population. ALPR systems are far from neutral tools for public safety and are increasingly being used to fuel discriminatory policing practices against historically marginalized people. 

The racially coded language in Flock’s logs mirrors long-standing patterns of discriminatory policing. Terms like “furtive movements,” “suspicious behavior,” and “high crime area” have always been cited by police to try to justify stops and searches of Black, Latine, and Native communities. These phrases might not appear in official logs because they’re embedded earlier in enforcement—in the traffic stop without clear cause, the undocumented stop-and-frisk, the intelligence bulletin flagging entire neighborhoods as suspect. They function invisibly until a body-worn camera, court filing, or audit brings them to light. Flock’s network didn’t create racial profiling; it industrialized it, turning deeply encoded and vague language into scalable surveillance that can search thousands of cameras across state lines. 

The Path Forward

U.S. Sen. Ron Wyden, D-OR, recently recommended that local governments reevaluate their decisions to install Flock Safety in their communities. We agree, but we also understand that sometimes elected officials need to see the abuse with their own eyes first. 

We know which agencies ran these racist searches, and they should be held accountable. But we also know that the vast majority of Flock Safety’s clients—thousands of police and sheriffs—also allowed those racist searches to run through their Flock Safety systems unchallenged. 

Elected officials must act decisively to address the racist policing enabled by Flock’s infrastructure. First, they should demand a complete audit of all ALPR searches conducted in their jurisdiction and a review of search logs to determine (a) whether their police agencies participated in discriminatory policing and (b) what safeguards, if any, exist to prevent such abuse. Second, officials should institute immediate restrictions on data-sharing through Flock’s nationwide network. As demonstrated by California law, for example, police agencies should not be able to share their ALPR data with federal authorities or out-of-state agencies, thus eliminating a vector for discriminatory searches to spread across state lines.

Ultimately, elected officials must terminate Flock Safety contracts entirely. The evidence is now clear: audit logs and internal policies alone cannot prevent a surveillance system from becoming a tool for racist policing. The fundamental architecture of Flock—thousands of cameras feeding into a nationwide searchable network—makes discrimination inevitable when enforcement mechanisms fail.

As Sen. Wyden astutely explained, “local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities.”

Table Overview and Notes

The following table compiles terms used by agencies to describe the reasons for searching the Flock Safety ALPR database. In a small number of cases, we removed additional information such as case numbers, specific incident details, and officers’ names that were present in the reason field. 

We removed one agency from the list due to the agency indicating that the word was a person’s name and not a reference to Romani people. 

In general, we did not include searches that used the term “Romanian,” although many of those may also be indicative of anti-Roma bias. We also did not include uses of “traveler” or “Traveller” when it did not include a clear ethnic modifier; however, we believe many of those searches are likely relevant.  

A text-based version of the spreadsheet is available here.

Originally posted to the EFF’s Deeplinks blog.

Posted on Techdirt - 17 October 2025 @ 03:36pm

Flock Safety & Texas Sheriff Claimed License Plate Search Was For A Missing Person. It Was An Abortion Investigation.

New documents and court records obtained by EFF show that Texas deputies queried Flock Safety’s surveillance data in an abortion investigation, contradicting the narrative promoted by the company and the Johnson County Sheriff that the woman was “being searched for as a missing person” and that “it was about her safety.” 

The new information shows that deputies had initiated a “death investigation” of a “non-viable fetus,” logged evidence of a woman’s self-managed abortion, and consulted prosecutors about possibly charging her. 

Johnson County Sheriff Adam King repeatedly denied the automated license plate reader (ALPR) search was related to enforcing Texas’s abortion ban, and Flock Safety called media accounts “false,” “misleading” and “clickbait.” However, according to a sworn affidavit by the lead detective, the case was in fact a death investigation in response to a report of an abortion, and deputies collected documentation of the abortion from the “reporting person,” her alleged romantic partner. The death investigation remained open for weeks, with detectives interviewing the woman and reviewing her text messages about the abortion. 

The documents show that the Johnson County District Attorney’s Office informed deputies that “the State could not statutorily charge [her] for taking the pill to cause the abortion or miscarriage of the non-viable fetus.”

An excerpt from the JCSO detective’s sworn affidavit.

The records include previously unreported details about the case that shocked public officials and reproductive justice advocates across the country when it was first reported by 404 Media in May. The case serves as a clear warning sign that when data from ALPRs is shared across state lines, it can put people at risk, including abortion seekers. And, in this case, the use may have run afoul of laws in Washington and Illinois.

A False Narrative Emerges

In May, 404 Media obtained data revealing the Johnson County Sheriff’s Office conducted a nationwide search of more than 83,000 Flock ALPR cameras, giving the reason in the search log: “had an abortion, search for female.” Both the Sheriff’s Office and Flock Safety have attempted to downplay the search as akin to a search for a missing person, claiming deputies were only looking for the woman to “check on her welfare” and that officers found a large amount of blood at the scene, a claim now contradicted by the responding investigator’s affidavit. Flock Safety went so far as to assert that journalists and advocates covering the story intentionally misrepresented the facts, describing it as “misreporting” and “clickbait-driven.” 

As Flock wrote of EFF’s previous commentary on this case (bold in original statement): 

Earlier this month, there was purposefully misleading reporting that a Texas police officer with the Johnson County Sheriff’s Office used LPR “to target people seeking reproductive healthcare.” This organization is actively perpetuating narratives that have been proven false, even after the record has been corrected.

According to the Sheriff in Johnson County himself, this claim is unequivocally false.

… No charges were ever filed against the woman and she was never under criminal investigation by Johnson County. She was being searched for as a missing person, not as a suspect of a crime.

That sheriff has since been arrested and indicted on felony counts in an unrelated sexual harassment and whistleblower retaliation case. He has also been charged with aggravated perjury for allegedly lying to a grand jury. EFF filed public records requests with Johnson County to obtain a more definitive account of events.

The newly released incident report and affidavit unequivocally describe the case as a “death investigation” of a “non-viable fetus.” These documents also undermine the claim that the ALPR search was in response to a medical emergency, since, in fact, the abortion had occurred more than two weeks before deputies were called to investigate. 

In recent years, anti-abortion advocates and prosecutors have increasingly attempted to use “fetal homicide” and “wrongful death” statutes – originally intended to protect pregnant people from violence – to criminalize abortion and pregnancy loss. These laws, which exist in dozens of states, establish legal personhood of fetuses and can be weaponized against people who end their own pregnancies or experience a miscarriage. 

In fact, a new report from Pregnancy Justice found that in just the first two years since the Supreme Court’s decision in Dobbs, prosecutors initiated at least 412 cases charging pregnant people with crimes related to pregnancy, pregnancy loss, or birth–most under child neglect, endangerment, or abuse laws that were never intended to target pregnant people. Nine cases included allegations around individuals’ abortions, such as possession of abortion medication or attempts to obtain an abortion–instances just like this one. The report also highlights how, in many instances, prosecutors use tangentially related criminal charges to punish people for abortion, even when abortion itself is not illegal.

By framing their investigation of a self-administered abortion as a “death investigation” of a “non-viable fetus,” Texas law enforcement was signaling their intent to treat the woman’s self-managed abortion as a potential homicide, even though Texas law does not allow criminal charges to be brought against an individual for self-managing their own abortion. 

The Investigator’s Sworn Account

Over two days in April, the woman went through the process of taking medication to induce an abortion. Two weeks later, her partner–who would later be charged with domestic violence against her–reported her to the sheriff’s office. 

The documents confirm that the woman was not present at the home when the deputies “responded to the death (Non-viable fetus).” As part of the investigation, officers collected evidence that the man had assembled of the self-managed abortion, including photographs, the FedEx envelope the medication arrived in, and the instructions for self-administering the medication. 

Another Johnson County official ran two searches through the ALPR database with the note “had an abortion, search for female,” according to Flock Safety search logs obtained by EFF. The first search, which has not been previously reported, probed 1,295 Flock Safety networks–composed of 17,684 different cameras–going back one week. The second search, which was originally exposed by 404 Media, was expanded to a full month of data across 6,809 networks, including 83,345 cameras. Both searches listed the same case number that appears on the death investigation/incident report obtained by EFF. 

After collecting the evidence from the woman’s partner, the investigators say they consulted the district attorney’s office, only to be told they could not press charges against the woman. 

An excerpt from the JCSO detective’s sworn affidavit.

Nevertheless, when the subject showed up at the Sheriff’s office a week later, officers were under the impression that she had come “to tell her side of the story about the non-viable fetus.” They interviewed her, inspected text messages about the abortion on her phone, and watched her write a timeline of events. 

Only after all that did they learn that she actually wanted to report a violent assault by her partner–the same individual who had called the police to report her abortion. She alleged that less than an hour after the abortion, he choked her, put a gun to her head, and made her beg for her life. The man was ultimately charged in connection with the assault, and the case is ongoing. 

This documented account runs completely counter to what law enforcement and Flock have said publicly about the case. 

Johnson County Sheriff Adam King told 404 Media: “Her family was worried that she was going to bleed to death, and we were trying to find her to get her to a hospital.” He later told the Dallas Morning News: “We were just trying to check on her welfare and get her to the doctor if needed, or to the hospital.”

The account by the detective on the scene makes no mention of concerned family members or a medical investigator. To the contrary, the affidavit says that they questioned the man as to why he “waited so long to report the incident,” and he responded that he needed to “process the event and call his family attorney.” The ALPR search was recorded 2.5 hours after the initial call came in, as documented in the investigation report.

The Desk Sergeant’s Report—One Month Later

EFF obtained a separate “case supplemental report” written by the sergeant who says he ran the May 9 ALPR searches. 

The sergeant was not present at the scene, and his account was written belatedly on June 5, almost a month after the incident and nearly a week after 404 Media had already published the sheriff’s alternative account of the Flock Safety search, kicking off a national controversy. The sheriff’s office provided this sergeant’s report to the Dallas Morning News. 

In the report, the sergeant claims that the officers on the ground asked him to start “looking up” the woman due to there being “a large amount of blood” found at the residence—an unsubstantiated claim that is in conflict with the lead investigator’s affidavit. The sergeant repeatedly expresses that the situation was “not making sense.” He claims he was worried that the partner had hurt the woman and her children, so “to check their welfare,” he used TransUnion’s TLO commercial investigative database system to look up her address. Once he identified her vehicle, he ran the plate through the Flock database, returning hits in Dallas.

Two abortion-related searches in the JCSO’s Flock Safety ALPR audit log

The sergeant’s report, filed after the case attracted media attention, notably omits any mention of the abortion at the center of the investigation, although it does note that the caller claimed to have found a fetus. The report does not explain, or even address, why the sergeant used the phrase “had an abortion, search for female” as the official reason for the ALPR searches in the audit log. 

It’s also unclear why the sergeant submitted the supplemental report at all, weeks after the incident. By that time, the lead investigator had already filed a sworn affidavit that contradicted the sergeant’s account. For example, the investigator, who was on the scene, does not describe finding any blood or taking blood samples into evidence, only photographs of what the partner believed to be the fetus. 

One area where they concur: both reports are clearly marked as a “death investigation.” 

Correcting the Record

Since 404 Media first reported on this case, King has perpetuated the false narrative, telling reporters that the woman was never under investigation, that officers had not considered charges against her, and that “it was all about her safety.”

But here are the facts: 

  • The reports that have been released so far describe this as a death investigation.
  • The lead detective described himself as “working a death investigation… of a non-viable fetus” at the time he interviewed the woman (a week after the ALPR searches).
  • The detective wrote that they consulted the district attorney’s office about whether they could charge her for “taking the pill to cause the abortion or miscarriage of the non-viable fetus.” They were told they could not.
  • Investigators collected extensive evidence, including photos and documentation of the abortion, and ran her through multiple databases. They even reviewed her text messages about the abortion. 
  • The death investigation was open for more than a month.

The death investigation was only marked closed in mid-June, weeks after 404 Media’s article and a mere days before the Dallas Morning News published its report, in which the sheriff inaccurately claimed the woman “was not under investigation at any point.”

Flock has promoted this unsupported narrative on its blog and in multimedia appearances. We did not reach out to Flock for comment on this article, as their communications director previously told us the company will not answer our inquiries until we “correct the record and admit to your audience that you purposefully spread misinformation which you know to be untrue” about this case. 

Consider the record corrected: It turns out the truth is even more damning than initially reported.

The Aftermath

In the aftermath of the original reporting, government officials began to take action. The networks searched by Johnson County included cameras in Illinois and Washington state, both states where abortion access is protected by law. Since then: 

  • The Illinois Secretary of State has announced his intent to “crack down on unlawful use of license plate reader data,” and urged the state’s Attorney General to investigate the matter. 
  • In California, which also has prohibitions on sharing ALPR data out of state and on using it for abortion-ban enforcement, the legislature cited the case in support of pending legislation to restrict ALPR use.
  • Ranking Members of the House Oversight Committee and one of its subcommittees launched a formal investigation into Flock’s role in “enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans.” 
  • Senator Ron Wyden secured a commitment from Flock to protect Oregonians’ data from out-of-state immigration and abortion-related queries.

In response to mounting pressure, Flock announced a series of new features supposedly designed to prevent future abuses. These include blocking “impermissible” searches, requiring that all searches include a “reason,” and implementing AI-driven audit alerts to flag suspicious activity. But as we’ve detailed elsewhere, these measures are cosmetic at best—easily circumvented by officers using vague search terms or reusing legitimate case numbers. The fundamental architecture that enabled the abuse remains unchanged. 

Meanwhile, as the news continued to harm the company’s sales, Flock CEO Garrett Langley embarked on a press tour to smear reporters and others who had raised alarms about the usage. In an interview with Forbes, he even doubled down and extolled the use of the ALPR in this case. 

So when I look at this, I go “this is everything’s working as it should be.” A family was concerned for a family member. They used Flock to help find her, when she could have been unwell. She was physically okay, which is great. But due to the political climate, this was really good clickbait.

Nothing about this is working as it should, but it is working as Flock designed. 

The Danger of Unchecked Surveillance

This case reveals the fundamental danger of allowing companies like Flock Safety to build massive, interconnected surveillance networks that can be searched across state lines with minimal oversight. When a single search query can access more than 83,000 cameras spanning almost the entire country, the potential for abuse is staggering, particularly when weaponized against people seeking reproductive healthcare. 

The searches in this case may have violated laws in states like Washington and Illinois, where restrictions exist specifically to prevent this kind of surveillance overreach. But those protections mean nothing when a Texas deputy can access cameras in those states with a few keystrokes, without external review that the search is legal and legitimate under local law. In this case, external agencies should have seen the word “abortion” and questioned the search, but the next time an officer is investigating such a case, they may use a more vague or misleading term to justify the search. In fact, it’s possible it has already happened. 

ALPRs were marketed to the public as tools to find stolen cars and locate missing persons. Instead, they’ve become a dragnet that allows law enforcement to track anyone, anywhere, for any reason—including investigating people’s healthcare decisions. This case makes clear that neither the companies profiting from this technology nor the agencies deploying it can be trusted to tell the full story about how it’s being used.

States must ban law enforcement from using ALPRs to investigate healthcare decisions and prohibit sharing data across state lines. Local governments may try remedies like reducing data retention periods to minutes instead of weeks or months—but, really, ending their ALPR programs altogether is the strongest way to protect their most vulnerable constituents. Without these safeguards, every license plate scan becomes a potential weapon against a person seeking healthcare.

Republished from the EFF’s Deeplinks blog.

Posted on Techdirt - 16 July 2025 @ 03:42pm

Axon’s Draft One Is Designed To Defy Transparency

Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.

Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.

You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here. 

Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.

For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system.  Now we’ve concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you’re a police chief or an independent researcher, because Axon designed it that way. 

Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information—placeholders that can also be quickly deleted. Officers are supposed to edit Draft One’s report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they’re done, the officer is prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed the report and made the necessary edits to ensure it is consistent with their recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.

Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI. 

One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.

But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used. 

So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won’t indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk. 

The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon’s first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports. 

“We love having new toys until the public gets wind of them,” the administrator wrote.

No Record of Who Wrote What

The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like: 

  • Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible? 
  • How often are officers finding and correcting errors made by the AI, and are there patterns to these errors? 
  • If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer? 
  • Is the AI overstepping in its interpretation of the audio? If a report says, “the subject made a threatening gesture,” was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says “yeah” throughout a conversation as a verbal acknowledgement that they’re listening to what the officer says, is that interpreted as an agreement or a confession?

Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the previous draft, originally created by Draft One, disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer’s own recollection. If an officer generates a Draft One report multiple times, there’s no way to tell whether the AI interprets the audio differently each time.

Axon is open about not maintaining these records, at least when it markets directly to law enforcement.

In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added): 

So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.

To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because “the last thing” they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit). 

Following up on the same question, Axon’s Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn’t be required to save every draft of a police report as they’re re-writing it. This is, of course, misdirection and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One is two processes from two parties–Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer from Axon and the version edited by the officer.
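Nothing about retaining both records is technically hard. As a minimal sketch—assuming a hypothetical `ReportRecord` data model that is not Axon's actual schema—keeping the AI draft alongside the final report would make every officer edit auditable with a standard line diff:

```python
import difflib
from dataclasses import dataclass


@dataclass
class ReportRecord:
    """Hypothetical two-record scheme: retain both versions, not just the final text."""
    ai_draft: str      # the narrative as the AI generated it, before any edits
    final_report: str  # the narrative after the officer's review and edits

    def officer_edits(self):
        """Return a unified diff showing exactly what the officer changed."""
        return list(difflib.unified_diff(
            self.ai_draft.splitlines(),
            self.final_report.splitlines(),
            fromfile="ai_draft", tofile="final_report", lineterm=""))
```

This is the same "track changes" capability basic word processors have offered for decades; discarding the first record is a design choice, not a technical limitation.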

Word processors may have long since stopped producing unexpected consequences in police report-writing, but Draft One is still unproven. After all, every AI-evangelist, including Axon, claims this technology is a game-changer. So, why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy? 

It also appears that Draft One isn’t simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department’s Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It’s more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon’s engineers had yet to finalize the feature at the time it was rolled out. 

One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If an officer lies or uses inappropriate language in a police report, who is to say whether the officer wrote it or the AI did? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Engineers have already discovered a bug that, on at least three occasions, allowed officers to circumvent the “guardrails” that supposedly deter officers from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.

To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it’s used. But Axon has intentionally made this difficult. 

What the Audit Trail Actually Looks Like 

You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So, we dug through the user manuals to figure out what that exactly means. 

The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we’ll get to that in a minute). 

This is disappointing because, without this information, it’s near impossible to do even the most basic statistical analysis: how many officers are using the technology and how often. 

Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:

  1. A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
  2. A log of an individual officer/user’s basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities this tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings. 

This means that, to do a comprehensive review, an evaluator may need to go through the record management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs. 
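To make the scale of that manual effort concrete, here is a rough sketch of what such an audit pass might look like, assuming an agency can export each user's audit trail as a CSV file. The directory layout, column names (`user`, `action`), and action strings below are illustrative assumptions, not Axon's documented export format:

```python
import csv
import glob
from collections import Counter

# Hypothetical Draft One-related action labels -- illustrative only.
DRAFT_ONE_ACTIONS = {"Draft One request", "Draft One disclosure signed"}


def tally_draft_one_usage(log_dir):
    """Count Draft One-related actions per officer across exported audit logs.

    Assumes one CSV per officer in `log_dir`, with "user" and "action" columns.
    """
    usage = Counter()
    for path in glob.glob(f"{log_dir}/*.csv"):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["action"] in DRAFT_ONE_ACTIONS:
                    usage[row["user"]] += 1
    return usage
```

Even this basic tally requires exporting and scanning every officer's log one by one—precisely the burden that makes department-wide review impractical.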

An example of Draft One usage in an audit log.

An auditor could also go report by report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time. 

But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as “I acknowledge this report was generated from a digital recording using Draft One by Axon.” If so, then an administrator can use “Draft One” as a keyword search to find relevant reports.
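The keyword approach can be sketched in a few lines. The disclosure phrasing comes from the article; the function and the `{report_id: narrative_text}` data layout are hypothetical illustrations, since report narratives would first have to be exported from whatever records system the agency uses:

```python
# Case-insensitive keyword drawn from the disclosure language quoted above.
DISCLOSURE_KEYWORD = "draft one"


def find_ai_reports(reports):
    """Return IDs of reports whose narrative contains the Draft One disclosure.

    `reports` is a hypothetical {report_id: narrative_text} mapping. This only
    works for agencies that require the disclosure language in the report body.
    """
    return sorted(rid for rid, text in reports.items()
                  if DISCLOSURE_KEYWORD in text.lower())
```

The fragility is obvious: the moment an agency omits the boilerplate, as Lafayette did, there is nothing left to search for.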

Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon’s most promoted clients, the Lafayette Police Department in Indiana, told us: 

“Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed.”

Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff’s Office, which does require a disclosure at the bottom of each report written with AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.

They told us: “We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe.”

We have requested further clarification from Axon, but they have yet to respond. 

However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn’t available to the police department itself. 

In response to a request from Politico’s Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports. 

An Axon representative responded: “Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy.”

But then, Axon followed up: “We track which reports use Draft One internally so I exported the data.” Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future. 


What is Being Done About Draft One

The California Assembly is currently considering SB 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense, first-step bill, and any law enforcement usage would be unlawful.

Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork. Or, at least, they spend too much time doing paperwork. The current research on whether Draft One remedies this issue shows mixed results, with some agencies reporting no real time savings and other agencies extolling its virtues (although their data also shows that results vary even within a department).

In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It’s like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards. 

Given how untested this technology is and how eager the company is to sell Draft One, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus sidestepping one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft. 

In King County, Washington, which includes Seattle, the prosecuting attorney’s office has been clear in its instructions: police should not use AI to write police reports. Its memo says:

We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now… AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.

We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product. 

Conclusion

Police should not be using AI to write police reports. There are just too many unanswered questions about how AI would translate the audio of situations and whether police will actually edit those drafts, while simultaneously, there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems or create new ones in an already unfair and opaque criminal justice system. 

EFF will continue to research and advocate against the use of this technology but for now, the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.  

Originally published to the EFF’s Deeplinks blog.

Posted on Techdirt - 20 January 2015 @ 02:34pm

How Seizing Assets Leads To More Surveillance… And Then More Seized Assets… And Then More Surveillance… And Then…

Note: The bulk of the research in this post was compiled prior to Attorney General Eric Holder’s surprise announcement that he is curtailing the federal equitable sharing program. The post may be updated as further ramifications of the policy decision become clear.

“You follow drugs, you get drug addicts and drug dealers. But you start to follow the money, and you don’t know where the f*** it’s gonna take you.”

This oft-cited wisdom comes from Detective Lester Freamon, a character in the classic HBO series The Wire, which tracked how an elite task force of (fictional) Baltimore cops used electronic surveillance to bring down criminal networks. But, the sentiment is ironic to a fault: if you keep following the money, it might take you right back to the police.

Asset forfeiture has long been a topic of controversy in law enforcement. Cops and prosecutors have had the power to seize property and cash from suspects before anyone has actually been convicted of a crime (usually narcotics-related). Then these law enforcement agencies have plugged a portion of that money (and money derived from auctioning of property) into their own budgets, allowing them to spend in ways that possibly would not have passed scrutiny during the formal appropriations process.

Critics note that asset forfeiture creates a perverse incentive for policing priorities: the more assets cops seize, the more money they get to spend. Satirist John Oliver characterized the practice as akin to “legalized robbery by law enforcement” in a must-watch segment on his show Last Week Tonight. News organizations, including the New York Times, the New Yorker, and the Washington Free Beacon, have recently outlined abuses of the system.

The good news is that, on Friday, the Washington Post reported that Attorney General Eric Holder is taking steps to rein in the federal version of the program, barring state and local law enforcement agencies from “using federal law to seize cash, cars and other property without evidence that a crime occurred.” 

Last year, the Washington Post‘s investigative team used the Freedom of Information Act to liberate hundreds of thousands of documents associated with federal asset forfeiture, including the entire collection of annual spending disclosures (“Equitable Sharing Agreements”) filed with the U.S. Department of Justice by each individual law enforcement agency and task force across the country that receives these funds. The documents reveal a wide variety of spending, from new vehicles and helicopters to drug “buy” money, payments to confidential informants, travel expenses, law enforcement equipment, and rewards for police, such as “challenge coins.”

An examination of these documents reveals the connection between seized assets and electronic surveillance across the country. In many ways, it has been a circular, self-sustaining system: asset forfeiture helps law enforcement agencies pay for electronic surveillance, which allows cops to seize more money to pay for still more electronic surveillance.

According to data compiled by the Washington Post for the years 2008-2013, law enforcement agencies around the country collectively spent $121 million of federal asset forfeiture funds on electronic surveillance equipment, an annual nationwide average of $20.2 million.  

The forms do not clearly define “electronic surveillance,” but it typically includes the type of equipment used in wiretaps. The amount of seized assets spent on electronic surveillance could be much higher, since law enforcement agencies can report staff time spent on surveillance in other categories. Sometimes agencies weren’t sure how to categorize certain technologies, such as automatic license plate readers and GPS tracking devices, so they reported them separately under other categories. To put it another way: these numbers are just the portion of the iceberg visible through public records.

The data sets are enormous, so let’s drill down on California. Brace yourself, it’s about to get mathy.

How Wiretaps Are Used to Seize Funds

California law enforcement agencies executed 2,078 wiretap orders between 2011 and 2013, according to the California Electronic Interceptions Reports, an annual accounting of electronic surveillance compiled by the California Attorney General’s Office. The reports show that these agencies seize hundreds of millions of dollars each year in wiretap-related criminal investigations, usually involving narcotics or gang activity, and frequently in partnership with federal agencies.

In Los Angeles County, law enforcement agencies conducted 515 wiretap operations over that three-year period, leading to the seizure of at least $25 million in assets. The California Electronic Interceptions Report provides details on the outcome of every single wiretap, which typically include the number of communications captured, the number of individuals affected by the wiretap, and any arrests made or drugs or assets seized. For example, you might see that one 2013 LA wiretap intercepted 9,273 communications involving 67 people, resulting in a single arrest and the seizure of $427,000 in alleged narcotics proceeds. Or that in 2012, LA authorities captured 6,176 communications involving 245 people, resulting in the seizure of $440,000 in alleged drug money. 

The Cost of Wiretaps

Electronic surveillance isn’t cheap. Between 2011 and 2013, the average cost to execute a wiretap order in California was $40,594, which included $36,807 for staff time and $3,787 for equipment-related expenses. 

From a bird’s eye view, California law enforcement agencies collectively spent $84 million on electronic interceptions during that period, an average of $28 million per year. Of that, staff time spent on electronic surveillance cost California agencies $76 million ($25 million annually) and equipment-related expenses cost $7.9 million ($2.6 million annually).
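Since things are getting mathy, here’s a quick back-of-envelope check of the figures above (a sketch in Python; all dollar amounts come from the California Electronic Interceptions Reports cited in this section):

```python
# Check the reported California wiretap cost figures for 2011-2013.
staff_per_wiretap = 36_807   # average staff-time cost per wiretap order
equip_per_wiretap = 3_787    # average equipment-related cost per order
print(staff_per_wiretap + equip_per_wiretap)   # 40594, the reported average total

years = 3
total_spend = 84_000_000     # statewide spending on electronic interceptions
staff_spend = 76_000_000     # staff-time portion
equip_spend = 7_900_000      # equipment-related portion
print(round(total_spend / years))   # 28000000 -- $28 million per year
print(round(staff_spend / years))   # roughly $25.3 million per year
print(round(equip_spend / years))   # roughly $2.6 million per year
```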

In Los Angeles County alone, law enforcement agencies spent $20.3 million between 2011 and 2013, including $2.1 million on wiretap equipment. 

How Seized Assets Were Turned into Electronic Surveillance

When local law enforcement agencies participated in federal investigations, the federal government paid them back by divvying up a portion of the proceeds from the seizures. These agencies included police departments, sheriff’s offices, and district attorneys’ offices, as well as investigative task forces that span multiple jurisdictions. The agencies were required to report annually, in broad categories, how they spent the money, including a category for electronic surveillance.

Between 2011 and 2013, law enforcement agencies in California spent a total of $13.6 million in funds from the federal asset forfeiture program on electronic surveillance equipment, a statewide average of $4.5 million per year. 

To give a sense of scale: that was enough to cover the equipment-related costs of wiretaps (installation fees, supplies, and hardware) for the entire state of California, with change left over. 

To look at it another way, that’s enough to pay for the equipment in more than 3,500 wiretaps, far more than these agencies actually conducted. This could indicate either that agencies bought more equipment than they needed to carry out these wiretaps, or that they spent significant portions of the money on surveillance that doesn’t require a wiretap order.
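The 3,500-wiretap figure can be reproduced from the same numbers (a rough sketch; it assumes the statewide average equipment cost per order applies uniformly):

```python
# Divide the federal forfeiture money California agencies spent on
# electronic surveillance (2011-2013) by the average equipment cost
# per wiretap order from the same period.
forfeiture_surveillance_spend = 13_600_000  # forfeiture funds spent on surveillance gear
avg_equipment_cost_per_order = 3_787        # average equipment cost per wiretap order

wiretaps_coverable = forfeiture_surveillance_spend // avg_equipment_cost_per_order
print(wiretaps_coverable)  # 3591 -- versus the 2,078 wiretaps actually executed
```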

Los Angeles County is made up of dozens of local law enforcement agencies and task forces, but two in particular consistently rose to the top of electronic surveillance spending: the Los Angeles County Sheriff’s Department and the Los Angeles Interagency Metropolitan Police Apprehension Crime Team (LA IMPACT), a cross-jurisdictional task force.

The Los Angeles County Sheriff’s Department dug deep into federal asset forfeiture funds to run its electronic surveillance operations. Between 2008 and 2014, the department received a total of $47.3 million from the federal program and spent roughly $4 million of it on electronic surveillance equipment. Meanwhile, LA IMPACT received approximately $30 million in asset forfeiture funds over that period, two-thirds of which it transferred to other law enforcement agencies. Of the remaining money, about $620,000 went toward electronic surveillance.

That’s how it works (or worked, on the federal level, before Holder’s announcement): police spend money on electronic surveillance, which leads to the seizure of funds from suspected criminals, and that money is then channeled back to police to spend on more electronic surveillance.  

Holder’s announcement could have a significant impact on how law enforcement agencies fund electronic surveillance. However, it’s important to remember that the next administration’s attorney general could easily reverse this policy decision. Further, many states also have their own asset forfeiture programs, so a whole second layer of funding remains on the state level.

The Washington Post has released its giant cache of Equitable Sharing Agreements from thousands of local law enforcement agencies around the country. We urge you to dig in, find your local cops, find out how they’ve spent this money, and let the world know what you discover.

Major thanks goes out to Washington Post data editor Steven Rich and his colleagues for freeing this data, making it available to the public, and helping us wrap our heads around the spreadsheets.

Reposted from the EFF Deeplinks blog
