Amazon Ring’s upcoming face recognition tool has the potential to violate the privacy rights of millions of people and could result in Amazon breaking state biometric privacy laws.
Ring plans to introduce a feature to its home surveillance cameras, called “Familiar Faces,” that identifies specific people who come into view of the camera. When turned on, the feature will scan the faces of everyone who approaches the camera and try to find a match against a list of pre-saved faces. That will include many people who have not consented to a face scan: friends and family, political canvassers, postal workers, delivery drivers, children selling cookies, or maybe even people passing on the sidewalk.
Many biometric privacy laws across the country are clear: Companies need your affirmative consent before running face recognition on you. In at least one state, ordinary people, with the help of attorneys, can challenge Amazon’s data collection. Where that is not possible, state privacy regulators should step in.
Sen. Ed Markey (D-Mass.) has already called on Amazon to abandon its plans and sent the company a list of questions. Ring spokesperson Emma Daniels answered written questions posed by EFF, which can be viewed here.
What is Ring’s “Familiar Faces”?
Amazon describes “Familiar Faces” as a tool that “intelligently recognizes familiar people.” It says this tool will provide camera owners with “personalized context of who is detected, eliminating guesswork and making it effortless to find and review important moments involving specific familiar people.” Amazon plans to release the feature in December.
The feature will allow camera owners to tag particular people so Ring cameras can automatically recognize them in the future. For Amazon to recognize particular people, it will need to perform face recognition on every person who steps in front of the camera. Even if a camera owner does not tag a particular face, Amazon says it may retain that biometric information for up to six months. Amazon said it does not currently use the biometric data for “model training or algorithmic purposes.”
To biometrically identify you, a company typically takes your image and extracts a faceprint: tiny measurements of your face converted into a series of numbers that is saved for later. When you step in front of a camera again, the company takes a new faceprint and compares it against the list of saved prints to find a match. Other forms of biometric tracking can be done with a scan of your fingertip, your eyeball, or even your particular gait.
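As a rough illustration of that matching step, here is a minimal Python sketch. The embedding size, the cosine-similarity metric, and the threshold are assumptions made for the example; they are not Ring’s actual implementation, which has not been published.

```python
import numpy as np

def match_faceprint(new_print, saved_prints, threshold=0.8):
    """Compare a freshly captured faceprint (an embedding vector) against
    a dictionary of saved, tagged faceprints. Returns the closest tag if
    the similarity clears the threshold, otherwise None ("unfamiliar")."""
    best_tag, best_score = None, -1.0
    for tag, saved in saved_prints.items():
        # Cosine similarity between the two number series ("faceprints").
        score = float(np.dot(new_print, saved) /
                      (np.linalg.norm(new_print) * np.linalg.norm(saved)))
        if score > best_score:
            best_tag, best_score = tag, score
    return best_tag if best_score >= threshold else None

# Hypothetical usage: every face the camera sees gets embedded and compared,
# whether or not that person ever agreed to a scan.
gallery = {"neighbor_dave": np.random.rand(128)}  # owner-tagged faceprints
passerby = np.random.rand(128)                    # untagged visitor's faceprint
print(match_faceprint(passerby, gallery))         # a tag, or None
```

The key point the sketch makes concrete: the comparison has to run on every face captured, not just the tagged ones.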
Amazon has told reporters that the feature will be off by default and that it would be unavailable in certain jurisdictions with the most active biometric privacy enforcement—including the states of Illinois and Texas, and the city of Portland, Oregon. The company would not promise that this feature will remain off by default in the future.
Why is This a Privacy Problem?
Your biometric data, such as your faceprint, is among the most sensitive data a company can collect. The associated risks include mass surveillance, data breaches, and discrimination.
Today’s feature to recognize your friend at your front door can easily be repurposed tomorrow for mass surveillance. Ring’s close partnership with police amplifies that threat. For example, in a city dense with face recognition cameras, the entirety of a person’s movements could be tracked with the click of a button, or everyone at a particular location could be identified. A recent and unrelated public-private partnership in New Orleans unfortunately shows that mass surveillance through face recognition is not some far-flung concern.
Amazon has already announced a related tool called “search party” that can identify and track lost dogs using neighbors’ cameras. A tool like this could be repurposed to let law enforcement track people. At least for now, Amazon says it does not have the technical capability to comply with a law enforcement demand for a list of all cameras in which a person has been identified. It does, however, comply with other law enforcement demands.
In addition, data breaches are a perpetual concern with any data collection. Biometrics magnify that risk because your face cannot be reset, unlike a password or credit card number. Amazon says it processes and stores biometrics collected by Ring cameras on its own servers, and that it uses comprehensive security measures to protect the data.
Face recognition has also been shown to have higher error rates with certain groups—most prominently with dark-skinned women. Similar technology has also been used to make questionable guesses about a person’s emotions, age, and gender.
Will Ring’s “Familiar Faces” Violate State Biometric Laws?
Any Ring collection of biometric information in states that require opt-in consent poses huge legal risk for the company. Amazon already told reporters that the feature will not be available in Illinois and Texas—strongly suggesting its feature could not survive legal scrutiny there. The company said it is also avoiding Portland, Oregon, which has a biometric privacy law that similar companies have avoided.
Its “Familiar Faces” feature will necessarily require its cameras to collect a faceprint from every person who comes into view of an enabled camera, to try to find a match. It is impossible for Amazon to obtain consent from everyone—especially people who do not own Ring cameras. It appears that Amazon will try to unload some consent requirements onto individual camera owners themselves. Amazon says it will provide in-app messages to customers, reminding them to comply with applicable laws. But Amazon—as a company itself collecting, processing, and storing this biometric data—could have its own consent obligations under numerous laws.
Many states aside from Illinois and Texas now protect biometric data. Washington passed a biometric privacy law in 2017, though the state has never enforced it. In 2023, Washington passed an even stronger law protecting biometric privacy, which allows individuals to sue on their own behalf. And at least 16 states have recently passed comprehensive privacy laws that often require companies to obtain opt-in consent for the collection of sensitive data, which typically includes biometric data. For example, in Colorado, a company that jointly with others determines the purpose and means of processing biometric data must obtain consent. Maryland goes further: such companies are essentially prohibited from collecting or processing biometric data from bystanders.
Many of these comprehensive laws have numerous loopholes and can only be enforced by state regulators—a glaring weakness facilitated in part by Amazon lobbyists.
Nonetheless, Ring’s new feature gives regulators a clear opportunity to step up: to investigate, to protect people’s privacy, and to test the strength of their laws.
More than 80 law enforcement agencies across the United States have used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety automated license plate reader (ALPR) network, according to audit logs obtained and analyzed by the Electronic Frontier Foundation.
When police run a search through the Flock Safety network, which links thousands of ALPR systems, they are prompted to leave a reason and/or case number for the search. Between June 2024 and October 2025, cops performed hundreds of searches for license plates using terms such as “roma” and “g*psy,” in many instances without any mention of a suspected crime. Other search reasons included “g*psy vehicle,” “g*psy group,” “possible g*psy,” “roma traveler” and “g*psy ruse,” perpetuating systemic harm by demeaning individuals based on their race or ethnicity.
These queries were run through thousands of police departments’ systems—and it appears that none of these agencies flagged the searches as inappropriate.
These searches are, by definition, racist.
Word Choices and Flock Searches
We are using the terms “Roma” and “Romani people” as umbrella terms, recognizing that they represent different but related groups. Since 2020, the U.S. federal government has officially recognized “Anti-Roma Racism” as including behaviors such as “stereotyping Roma as persons who engage in criminal behavior” and using the slur “g*psy.” According to the U.S. Department of State, this language “leads to the treatment of Roma as an alleged alien group and associates them with a series of pejorative stereotypes and distorted images that represent a specific form of racism.”
Nevertheless, police officers have run hundreds of searches for license plates using the terms “roma” and “g*psy.” (Unlike the police ALPR queries we’ve uncovered, we substitute an asterisk for the Y to avoid repeating this racist slur). In many cases, these terms have been used on their own, with no mention of crime. In other cases, the terms have been used in contexts like “g*psy scam” and “roma burglary,” when ethnicity should have no relevance to how a crime is investigated or prosecuted.
A “g*psy scam” and a “roma burglary” do not exist in criminal law as anything separate from any other type of fraud or burglary. Several agencies contacted by EFF have since acknowledged the inappropriate use of these terms and said they would address the issue internally.
“The use of the term does not reflect the values or expected practices of our department,” a representative of the Palos Heights (IL) Police Department wrote to EFF after being confronted with two dozen searches involving the term “g*psy.” “We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language.”
Of course, the broader issue is that allowing “g*psy” or “Roma” as a reason for a search isn’t just offensive; it implies the criminalization of an ethnic group. In fact, the Grand Prairie Police Department in Texas searched for “g*psy” six times while using Flock’s “Convoy” feature, which allows an agency to identify vehicles traveling together—in essence targeting an entire traveling community of Roma without specifying a crime.
At the bottom of this post is a list of agencies and the terms they used when searching the Flock system.
Anti-Roma Racism in an Age of Surveillance
Racism against Romani people has been a problem for centuries, with one of its most horrific manifestations during the Holocaust, when the Third Reich and its allies perpetrated genocide by murdering hundreds of thousands of Romani people and sterilizing thousands more. Despite efforts by the UN and EU to combat anti-Roma discrimination, this form of racism persists. As scholars Margareta Matache and Mary T. Bassett explain, it is perpetuated by modern American policing practices:
In recent years, police departments have set up task forces specialised in “G*psy crimes”, appointed “G*psy crime” detectives, and organised police training courses on “G*psy criminality”. The National Association of Bunco Investigators (NABI), an organisation of law enforcement professionals focusing on “non-traditional organised crime”, has even created a database of individuals arrested or suspected of criminal activity, which clearly marked those who were Roma.
Thus, it is no surprise that a 2020 Harvard University survey of Romani Americans found that 4 out of 10 respondents reported being subjected to racial profiling by police. This demonstrates the ongoing challenges they face due to systemic racism and biased policing.
Notably, many police agencies using surveillance technologies like ALPRs have adopted some sort of basic policy against biased policing or the use of these systems to target people based on race or ethnicity. But even when such policies are in place, an agency’s failure to enforce them allows these discriminatory practices to persist. These searches were also run through the systems of thousands of other police departments that may have their own policies and state laws that prohibit bias-based policing—yet none of those agencies appeared to have flagged the searches as inappropriate.
The Flock search data in question here shows that surveillance technology exacerbates racism, and even well-meaning policies to address bias can quickly fall apart without proper oversight and accountability.
Cops In Their Own Words
EFF reached out to a sample of the police departments that ran these searches. Here are five representative responses we received from police departments in Illinois, California, and Virginia. They do not inspire confidence.
1. Lake County Sheriff’s Office, IL
In June 2025, the Lake County Sheriff’s Office ran three searches for a dark colored pick-up truck, using the reason: “G*PSY Scam.” The search covered 1,233 networks, representing 14,467 different ALPR devices.
In response to EFF, a sheriff’s representative wrote via email:
“Thank you for reaching out and for bringing this to our attention. We certainly understand your concern regarding the use of that terminology, which we do not condone or support, and we want to assure you that we are looking into the matter.
Any sort of discriminatory practice is strictly prohibited at our organization. If you have the time to take a look at our commitment to the community and our strong relationship with the community, I firmly believe you will see discrimination is not tolerated and is quite frankly repudiated by those serving in our organization.
We appreciate you bringing this to our attention so we can look further into this and address it.”
2. Sacramento Police Department, CA
In May 2025, the Sacramento Police Department ran six searches using the term “g*psy.” The search covered 468 networks, representing 12,885 different ALPR devices.
In response to EFF, a police representative wrote:
“Thank you again for reaching out. We looked into the searches you mentioned and were able to confirm the entries. We’ve since reminded the team to be mindful about how they document investigative reasons. The entry reflected an investigative lead, not a disparaging reference.
We appreciate the chance to clarify.”
3. Palos Heights Police Department, IL
In September 2024, the Palos Heights Police Department ran more than two dozen searches using terms such as “g*psy vehicle,” “g*psy scam” and “g*psy concrete vehicle.” Most searches hit roughly 1,000 networks.
In response to EFF, a police representative said the searches were related to a singular criminal investigation into a vehicle involved in a “suspicious circumstance/fraudulent contracting incident” and is “not indicative of a general search based on racial or ethnic profiling.” However, the agency acknowledged the language was inappropriate:
“The use of the term does not reflect the values or expected practices of our department. We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language.
We appreciate your outreach on this matter and the opportunity to provide clarification.”
4. Irvine Police Department, CA
In February and May 2025, the Irvine Police Department ran eight searches using the term “roma” in the reason field. The searches covered 1,420 networks, representing 29,364 different ALPR devices.
In a call with EFF, an IPD representative explained that the cases were related to a series of organized thefts. However, they acknowledged the issue, saying, “I think it’s an opportunity for our agency to look at those entries and to use a case number or use a different term.”
5. Fairfax County Police Department, VA
Between December 2024 and April 2025, the Fairfax County Police Department ran more than 150 searches involving terms such as “g*psy case” and “roma crew burglaries.” Fairfax County PD continued to defend its use of this language.
In response to EFF, a police representative wrote:
“Thank you for your inquiry. When conducting searches in investigative databases, our detectives must use the exact case identifiers, terms, or names connected to a criminal investigation in order to properly retrieve information. These entries reflect terminology already tied to specific cases and investigative files from other agencies, not a bias or judgment about any group of people. The use of such identifiers does not reflect bias or discrimination and is not inconsistent with our Bias-Based Policing policy within our Human Relations General Order.”
A National Trend
Roma individuals and families are not the only ones being systematically and discriminatorily targeted by ALPR surveillance technologies. For example, Flock audit logs show agencies ran 400 more searches using terms targeting Traveller communities more generally, with a specific focus on Irish Travellers, often without any mention of a crime.
Across the country, these tools are enabling and amplifying racial profiling by embedding longstanding policing biases into surveillance technologies. For example, data from Oak Park, IL, show that 84% of drivers stopped in Flock-related traffic incidents were Black—despite Black people making up only 19% of the local population. ALPR systems are far from being neutral tools for public safety and are increasingly being used to fuel discriminatory policing practices against historically marginalized people.
The racially coded language in Flock’s logs mirrors long-standing patterns of discriminatory policing. Terms like “furtive movements,” “suspicious behavior,” and “high crime area” have always been cited by police to try to justify stops and searches of Black, Latine, and Native communities. These phrases might not appear in official logs because they’re embedded earlier in enforcement—in the traffic stop without clear cause, the undocumented stop-and-frisk, the intelligence bulletin flagging entire neighborhoods as suspect. They function invisibly until a body-worn camera, court filing, or audit brings them to light. Flock’s network didn’t create racial profiling; it industrialized it, turning deeply encoded and vague language into scalable surveillance that can search thousands of cameras across state lines.
The Path Forward
U.S. Sen. Ron Wyden, D-OR, recently recommended that local governments reevaluate their decisions to install Flock Safety in their communities. We agree, but we also understand that sometimes elected officials need to see the abuse with their own eyes first.
We know which agencies ran these racist searches, and they should be held accountable. But we also know that the vast majority of Flock Safety’s clients—thousands of police and sheriffs—also allowed those racist searches to run through their Flock Safety systems unchallenged.
Elected officials must act decisively to address the racist policing enabled by Flock’s infrastructure. First, they should demand a complete audit of all ALPR searches conducted in their jurisdiction and a review of search logs to determine (a) whether their police agencies participated in discriminatory policing and (b) what safeguards, if any, exist to prevent such abuse. Second, officials should institute immediate restrictions on data-sharing through Flock’s nationwide network. As demonstrated by California law, for example, police agencies should not be able to share their ALPR data with federal authorities or out-of-state agencies, thus eliminating a vehicle for discriminatory searches spreading across state lines.
Ultimately, elected officials must terminate Flock Safety contracts entirely. The evidence is now clear: audit logs and internal policies alone cannot prevent a surveillance system from becoming a tool for racist policing. The fundamental architecture of Flock—thousands of cameras feeding into a nationwide searchable network—makes discrimination inevitable when enforcement mechanisms fail.
As Sen. Wyden astutely explained, “local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities.”
Table Overview and Notes
The following table compiles terms used by agencies to describe the reasons for searching the Flock Safety ALPR database. In a small number of cases, we removed additional information such as case numbers, specific incident details, and officers’ names that were present in the reason field.
We removed one agency from the list after it indicated that the word was a person’s name and not a reference to Romani people.
In general, we did not include searches that used the term “Romanian,” although many of those may also be indicative of anti-Roma bias. We also did not include uses of “traveler” or “Traveller” when it did not include a clear ethnic modifier; however, we believe many of those searches are likely relevant.
A text-based version of the spreadsheet is available here.
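For readers curious what this kind of term-based review looks like mechanically, below is a minimal, hypothetical Python sketch of filtering an exported audit log for the terms discussed above. The CSV column names and file path are assumptions, not Flock’s real export format or EFF’s exact methodology, and as the notes above suggest, simple substring matching over-matches (for example, “Romanian” or personal names) and still requires manual review.

```python
import csv
from collections import Counter

# Terms are shown censored to match this article's convention; an actual
# review would need the uncensored spellings plus manual checks to rule
# out false positives such as "Romanian" or personal names.
FLAGGED_TERMS = ("roma", "g*psy")

def flagged_searches(path):
    """Yield (agency, reason) pairs for audit-log rows whose stated
    search reason contains one of the flagged terms."""
    with open(path, newline="", encoding="utf-8") as f:
        # Assumed columns: agency, reason, timestamp
        for row in csv.DictReader(f):
            reason = (row.get("reason") or "").lower()
            if any(term in reason for term in FLAGGED_TERMS):
                yield row.get("agency", "unknown"), reason

# Tally flagged searches per agency from a hypothetical export file.
counts = Counter(agency for agency, _ in flagged_searches("flock_audit_log.csv"))
for agency, n in counts.most_common():
    print(f"{agency}: {n} flagged searches")
```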
A San Francisco supervisor has proposed that police and other city agencies should have no financial consequences for breaking a landmark surveillance oversight law. In 2019, organizations from across the city worked together to help pass that law, which required law enforcement to get the approval of democratically elected officials before they bought and used new spying technologies. Bit by bit, the San Francisco Police Department and the Board of Supervisors have weakened that law—but one important feature of the law remained: if city officials are caught breaking this law, residents can sue to enforce it, and if they prevail they are entitled to attorney fees.
Now Supervisor Matt Dorsey believes that this important accountability feature is “incentivizing baseless but costly lawsuits that have already squandered hundreds of thousands of taxpayer dollars over bogus alleged violations of a law that has been an onerous mess since it was first enacted.”
Between 2010 and 2023, San Francisco had to spend roughly $70 million to settle civil suits brought against the SFPD for alleged misconduct ranging from shooting city residents to wrongfully firing whistleblowers. This is not “squandered” money; it is compensating people for injury. We are all governed by laws and are all expected to act accordingly—police are not exempt from consequences for using their power wrongfully. In the 21st century, this accountability must extend to using powerful surveillance technology responsibly.
The ability to sue a police department when they violate the law is called a “private right of action” and it is absolutely essential to enforcing the law. Government officials tasked with making other government officials turn square corners will rarely have sufficient resources to do the job alone, and often they will not want to blow the whistle on peers. But city residents empowered to bring a private right of action typically cannot do the job alone, either—they need a lawyer to represent them. So private rights of action provide for an attorney fee award to people who win these cases. This is a routine part of scores of public interest laws involving civil rights, labor safeguards, environmental protection, and more.
Without an enforcement mechanism to hold police accountable, many will just ignore the law. They’ve done it before. AB 481 is a California state law that requires police to get elected official approval before attempting to acquire military equipment, including drones. The SFPD knowingly ignored this law. If it had an enforcement mechanism, more police would follow the rules.
President Trump recently included San Francisco in a list of cities he would like the military to occupy. Law enforcement agencies across the country, either willingly or by compulsion, have been collaborating with federal agencies operating at the behest of the White House. So it would be best for cities to keep their co-optable surveillance infrastructure small, transparent, and accountable. With authoritarianism looming, now is not the time to make police harder to hold accountable—especially considering SFPD has already disclosed surveillance data to Immigration and Customs Enforcement (ICE) in violation of California state law.
We’re calling on the Board of Supervisors to reject Supervisor Dorsey’s proposal. If police want to avoid being sued and forced to pay the prevailing party’s attorney fees, they should avoid breaking the laws that govern police surveillance in the city.
In his ruling, U.S. District Judge Charles Breyer said that National Guard troops in Los Angeles had received improper training on the legal scope of their authority under federal law. He ruled that the president’s order for the troops to engage in “domestic military law enforcement” violated the Posse Comitatus Act, which – with limited exceptions – bars the use of the military in civilian law enforcement.
While he did not require the remaining soldiers to leave Los Angeles, Breyer called on the administration to refrain from using them “to execute laws.”
The Los Angeles case, President Donald Trump’s deployment of National Guard troops to fight crime in Washington, D.C., and his recent vow to send the Guard to Chicago and Baltimore to fight crime blur practical and philosophical lines erected in both law and longtime custom between the military and the police.
As a policing scholar and former FBI special agent, I believe the plan to continue using National Guard troops to reduce crime in cities such as Chicago and Baltimore violates the legal prohibition against domestic military law enforcement.
Limited law enforcement function
State and local police training focuses on law enforcement and maintaining order. Community policing, which is a collaboration between police and the community to solve problems, and the use-of-force continuum – the escalating series of appropriate actions an officer may take to resolve a situation – also form part of training.
The initial 10-week training program for National Guard recruits includes learning skills such as the use of M16 military assault rifles and grenade launchers. It also includes learning guerrilla warfare tactics, as well as tactics for neutralizing improvised explosive devices while engaging in military operations. While valuable in a military setting, such activities aren’t part of domestic policing and law enforcement.
While the National Guard has, by law, a limited law enforcement function in times of domestic emergencies, it’s a unique part of the U.S. military that typically responds – at the request of a state’s governor – to natural disasters and extreme violence.
But sending soldiers who are not well versed in policing increases the likelihood of mistakes. One of the most well-known examples is the Kent State shootings on May 4, 1970, when National Guardsmen sent to the university by Ohio’s governor opened fire and killed four unarmed students during an anti-war protest on campus.
Thousands of National Guard troops were sent to multiple states at the request of state governors following Hurricane Sandy in 2012. Among other tasks, President Barack Obama’s administration directed the Department of Defense to support FEMA’s efforts to restore power to thousands of homes.
The last time a president bypassed a state’s governor in sending the National Guard to quell civil unrest was in Selma, Alabama, in 1965. President Lyndon B. Johnson deployed the National Guard to protect civil rights protesters without the cooperation of Alabama Gov. George Wallace, a prominent segregationist.
Trump is changing this precedent by sending National Guard troops to Los Angeles, despite the fact that Gov. Gavin Newsom neither refused to follow federal law nor requested military support. In June 2025, Trump overrode Newsom and sent Guard troops to shield federal agents with Immigration and Customs Enforcement from political protests.
The decision to send federal troops to a political protest in Los Angeles has raised core legal questions. The First Amendment’s protection of the right to political protest is a pillar of U.S. jurisprudence.
‘Federalizing’ the Guard
The governed have a right to hold the government accountable and ensure that the government’s power reflects the consent of the governed.
The right to protest, of course, does not extend to criminal behavior. But the use of military personnel raises a pressing question: Is the president justified in sending military personnel to address pockets of criminality, instead of relying on state or local police?
One of a president’s legal avenues is to use a federal statute to do what’s called “federalizing” the National Guard. This means troops are temporarily transitioned from state to federal military control.
What is unique about the deployment in California is that Newsom objected to Trump’s decision to federalize troops. California in June 2025 sued the Trump administration, arguing the president unlawfully bypassed the governor when he federalized the National Guard.
On Sept. 4, 2025, Washington, D.C., sued the Trump administration on similar grounds. The lawsuit follows Trump’s decision in August to deploy hundreds of National Guard troops to police the capital.
For the president to legally take control of and deploy the California National Guard under federal statutes, it was necessary for the criminality in Los Angeles to rise to a “rebellion” against the U.S.
More generally, the president is prohibited from using military force – including the Marines – against civilians in pursuit of normal law-enforcement goals. This bedrock principle is based on the Posse Comitatus Act of 1878 and permits only rare exceptions, as stipulated by the Insurrection Act of 1807. This act empowers the president to deploy the U.S. military to states in circumstances relating to the suppression of an insurrection.
In addition to the practical differences between the military and the police, there are philosophical differences derived from core principles of federalism, which refers to the division of power between the national and state governments.
In the United States, police power is derived from the 10th Amendment, which gives states the rights and powers “not delegated to the United States.” It is the states that have the power to establish and enforce laws protecting the welfare, safety and health of the public.
The use of military personnel in domestic affairs is limited by deeply entrenched policy and legal frameworks.
The deployment of National Guard troops for routine crime fighting in cities such as Los Angeles and Washington, and the proposed deployment of those troops to Chicago and Baltimore, highlights the erosion of both practical and philosophical constraints on the president and the vast federal power the president wields.
When you read about Adam Raine’s suicide and ChatGPT’s role in helping him plan his death, the immediate reaction is obvious and understandable: something must be done. OpenAI should be held responsible. This cannot happen again.
Those instincts are human and reasonable. The horrifying details in the NY Times and the family’s lawsuit paint a picture of a company that failed to protect a vulnerable young man when its AI offered help with specific suicide methods and encouragement.
But here’s what happens when those entirely reasonable demands for accountability get translated into corporate policy: OpenAI didn’t just improve their safety protocols—they announced plans to spy on user conversations and report them to law enforcement. It’s a perfect example of how demands for liability from AI companies can backfire spectacularly, creating exactly the kind of surveillance dystopia that plenty of people have long warned about.
There are plenty of questions about how liability should be handled with generative AI tools, and while I understand the concerns about potential harms, we need to think carefully about whether the “solutions” we’re demanding will actually make things better—or just create new problems that hurt everyone.
The specific case itself is more nuanced than the initial headlines suggest. Initially, ChatGPT responded to Adam’s suicidal thoughts by trying to reassure him, but once he decided he wished to end his life, ChatGPT was willing to help there as well:
Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.
But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.
There’s a lot more in the article and even more in the lawsuit his family filed against OpenAI in a state court in California.
Almost everyone I saw responding to this initially said that OpenAI should be liable and responsible for this young man’s death. And I understand that instinct. It feels conceptually right. The chats are somewhat horrifying as you read them, especially because we know how the story ends.
It’s also not that difficult to understand how this happened. These AI chatbots are designed to be “helpful,” sometimes to a fault. But they mostly treat “helpfulness” as doing whatever the user requests, which may not actually be what is best for that individual. So if you ask one questions, it tries to be helpful. From the released transcripts, you can tell that ChatGPT has built in some guardrails regarding suicidal ideation, in that it repeatedly suggested Adam get professional help. But when he started asking more specific questions, ones that were less directly or obviously about suicide to a bot (though a human might have been more likely to recognize them as such), it still tried to help.
So, take this part:
ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help. At the end of March, after Adam attempted death by hanging for the first time, he uploaded a photo of his neck, raw from the noose, to ChatGPT.
Absolutely horrifying in context, a context that all of us reading it know. But ChatGPT doesn’t know the context. It just knows that someone is asking whether anyone will notice the mark on his neck. It’s being “helpful” and answering the question.
But it’s not human. It doesn’t process things like a human does. It’s just trying to be helpful by responding to the prompt it was given.
The public response was predictable and understandable: OpenAI should be held responsible and must prevent this from happening again. But that leaves open what that actually means in practice. Unfortunately, we can already see how those entirely reasonable demands translate into corporate policy.
OpenAI’s actual response to the lawsuit and public outrage? Announcing plans for much greater surveillance and snitching on ChatGPT chats. This is exactly the kind of “solution” that liability regimes consistently produce: more surveillance, more snitching, and less privacy for everyone.
When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.
There are, obviously, some times when you could see it being helpful if someone referred dangerous activities to law enforcement, but there are also so many times when it can be actively more harmful. Including in the situations where someone is looking to take their own life. There’s a reason the term “suicide by cop” exists. Will random people working for OpenAI know the difference?
But the surveillance problem is just the symptom. The deeper issue is how liability frameworks around suicide consistently create perverse incentives that don’t actually help anyone.
It is tempting to try to blame others when someone dies by suicide. We’ve seen plenty of such cases and claims over the years, including the infamous Lori Drew case from years ago. And we’ve discussed why punishing people based on others’ death by suicide is a very dangerous path.
First, it gives excess power to those who are considering death by suicide, as they can use it to get “revenge” on someone if our society starts blaming others legally. Second, it actually takes away the concept of agency from those who (tragically and unfortunately) choose to end their own life by such means. In an ideal world, we’d have proper mental health resources to help people, but there are always going to be some people determined to take their own life.
If we are constantly looking to place blame on a third party, that’s almost always going to lead to bad results. Even in this case, we see that when ChatGPT nudged Adam towards getting help, he worked out ways to change the context of the conversation to get him closer to his own goal. We need to recognize that the decision to take one’s own life via suicide is an individual’s decision that they are making. Blaming third parties suggests that the individual themselves had no agency at all and that’s also a very dangerous path.
For example, as I’ve mentioned before in these discussions, in high school I had a friend who died by suicide. It certainly appeared to happen in response to the end of a romantic relationship. The former romantic partner in that case was deeply traumatized as well (the method of suicide was designed to traumatize that individual). But if we open up the idea that we can blame someone else for “causing” a death by suicide, someone might have thought to sue that former romantic partner as well, arguing that their recent breakup “caused” the death.
This does not seem like a fruitful path for anyone to go down. It just becomes an exercise in lashing out at many others who somehow failed to stop an individual from doing what they were ultimately determined to do, even if they did not know or believe what that person would eventually do.
The rush to impose liability on AI companies also runs headlong into First Amendment problems. Even if you could somehow hold OpenAI responsible for Adam’s death, it’s unclear what legal violation they actually committed. The company did try to push him towards help—he steered the conversation away from that.
But some are now arguing that any AI assistance with suicide methods should be illegal. That path leads to the same surveillance dead end, just through criminal law instead of civil liability. There are plenty of books that one could read that a motivated person could use to learn how to end their own life. Should that be a crime? Would we ban books that mention the details of certain methods of suicide?
Already we have precedents suggesting the First Amendment would not allow that. I’ve mentioned it many times in the past, but in Winter v. G.P. Putnam’s Sons, the court found that the publisher of an encyclopedia of mushrooms wasn’t liable to people who ate poisonous mushrooms the book said were safe, because the publisher itself didn’t have actual knowledge that those mushrooms were poisonous. Or there’s Smith v. Linn, in which the publisher of a book promoting an insanely dangerous diet was not held liable, on First Amendment grounds, for readers who followed the diet and died as a result.
You can argue that those and a bunch of similar cases were decided incorrectly, but it would only lead to an absolute mess. Any time someone dies, there would be a rush of lawyers looking for any company to blame. Did they read a book that mentioned suicide? Did they watch a YouTube video or spend time on a Wikipedia page?
We need to recognize that people themselves have agency, and this rush to act as though everyone is a mindless bot controlled by the computer systems they use leads us nowhere good. Indeed, as we’re seeing with this new surveillance and snitch effort by OpenAI, it can actually lead to an even more dangerous world for nearly all users.
The Adam Raine case is a tragedy that demands our attention and empathy. But it’s also a perfect case study in how our instinct to “hold someone accountable” can create solutions that are worse than the original problem.
OpenAI’s response—more surveillance, more snitching to law enforcement—is exactly what happens when we demand corporate liability without thinking through the incentives we’re creating. Companies don’t magically develop better judgment or more humane policies when faced with lawsuits. They develop more ways to shift risk and monitor users.
Want to prevent future tragedies? The answer isn’t giving AI companies more reasons to spy on us and report us to authorities. It’s investing in actual mental health resources, destigmatizing help-seeking, and, yes, accepting that we live in a world where people have agency—including the tragic agency to make choices we wish they wouldn’t make.
The surveillance state we’re building, one panicked corporate liability case at a time, won’t save the next Adam Raine. But it will make all of us less free.
One afternoon in mid-September, a group of middle school girls in rural East Tennessee decided to film a TikTok video while waiting to begin cheerleading practice.
In the 45-second video posted later that day, one girl enters the classroom holding a cellphone. “Put your hands up,” she says, while a classmate flickers the lights on and off. As the camera pans across the classroom, several girls dramatically fall back on a desk or the floor and lie motionless, pretending they were killed.
When another student enters and surveys the bodies on the ground in poorly feigned shock, few manage to suppress their giggles. Throughout the video, which ProPublica obtained, a line of text reads: “To be continued……”
Penny Jackson’s 11-year-old granddaughter was one of the South Greene Middle School cheerleaders who played dead. She said the co-captains told her what to do and she did it, unaware of how it would be used. The next day, she was horrified when the police came to school to question her and her teammates.
By the end of the day, the Greene County Sheriff’s Department charged her and 15 other middle school cheerleaders with disorderly conduct for making and posting the video. Standing outside the school’s brick facade, Lt. Teddy Lawing said in a press conference that the girls had to be “held accountable through the court system” to show that “this type of activity is not warranted.” The sheriff’s office did not respond to ProPublica’s questions about the incident.
Widespread fear of school shootings is colliding with algorithms that accelerate the spread of the most outrageous messages, causing chaos across the country. Social videos, memes and retweets are becoming fodder for criminal charges in an era of heightened responses to student threats. Authorities say harsh punishment is crucial to deter students from making threatening posts that multiply rapidly and obscure their original source.
In many cases, especially in Tennessee, police are charging students for jokes and misinterpretations, drawing criticism from families and school violence prevention experts who believe a measured approach is more appropriate. Students are learning the hard way that they can’t control where their social media messages travel. In central Tennessee last fall, a 16-year-old privately shared a video he created using artificial intelligence, and a friend forwarded it to others on Snapchat. The 16-year-old was expelled and charged with threatening mass violence, even though his school acknowledged the video was intended as a private joke.
Other students have been charged with felonies for resharing posts they didn’t create. As ProPublica wrote in May, a 12-year-old in Nashville was arrested and expelled this year for sharing a screenshot of threatening texts on Instagram. He told school officials he was attempting to warn others and wanted to “feel heroic.”
In Greene County, the cheerleaders’ video sent waves through the small rural community, especially since it was posted several days after the fatal Apalachee High School shooting one state away. The Georgia incident had spawned thousands of false threats looping through social media feeds across the country. Lawing told ProPublica and WPLN at the time that his officers had fielded about a dozen social media threats within a week and struggled to investigate them. “We couldn’t really track back to any particular person,” he said.
But the cheerleaders’ video, with their faces clearly visible, was easy to trace.
Jackson understands that the video was in “very poor taste,” but she believes the police overreacted and traumatized her granddaughter in the process. “I think they blew it completely out of the water,” she said. “To me, it wasn’t serious enough to do that, to go to court.”
That perspective is shared by Makenzie Perkins, the threat assessment supervisor of Collierville Schools, outside of Memphis. She is helping her school district chart a different path in managing alleged social media threats. Perkins has sought specific training on how to sort out credible threats online from thoughtless reposts, allowing her to focus on students who pose real danger instead of punishing everyone.
The charges in Greene County, she said, did not serve a real purpose and indicate a lack of understanding about how to handle these incidents. “You’re never going to suspend, expel or charge your way out of targeted mass violence,” she said. “Did those charges make that school safer? No.”
When 16-year-old D.C. saw an advertisement for an AI video app last October, he eagerly downloaded it and began roasting his friends. In one video he created, his friend stood in the Lincoln County High School cafeteria, his mouth and eyes moving unnaturally as he threatened to shoot up the school and bring a bomb in his backpack. (We are using D.C.’s initials and his dad’s middle name to protect their privacy, because D.C. is a minor.)
D.C. sent it to a private Snapchat group of about 10 friends, hoping they would find it hilarious. After all, they had all teased this friend about his dark clothes and quiet nature. But the friend did not think it was funny. That evening, D.C. showed the video to his dad, Alan, who immediately made him delete it as well as the app. “I explained how it could be misinterpreted, how inappropriate it was in today’s climate,” Alan recalled to ProPublica.
It was too late. One student in the chat had already copied D.C.’s video and sent it to other students on Snapchat, where it began to spread, severed from its initial context.
That evening, a parent reported the video to school officials, who called in local police to do an investigation. D.C. begged his dad to take him to the police station that night, worried the friend in the video would get in trouble — but Alan thought it could wait until morning.
The next day, D.C. rushed to school administrators to explain and apologize. According to Alan, administrators told D.C. they “understood it was a dumb mistake,” uncharacteristic for the straight-A student with no history of disciplinary issues. In a press release, Lincoln County High School said administrators were “made aware of a prank threat that was intended as a joke between friends.”
But later that day, D.C. was expelled from school for a year and charged with a felony for making a threat of mass violence. As an explanation, the sheriff’s deputy wrote in the affidavit, “Above student did create and distribute a video on social media threatening to shoot the school and bring a bomb.”
During a subsequent hearing where D.C. appealed his school expulsion, Lincoln County Schools administrators described their initial panic when seeing the video. Alan shared an audio recording of the hearing with ProPublica. Officials didn’t know that the video was generated by AI until the school counselor saw a small logo in the corner. “Everybody was on pins and needles,” the counselor said at the hearing. “What are we going to do to protect the kids or keep everybody calm the next day if it gets out?” The school district declined to respond to ProPublica’s questions about how officials handled the incident, even though Alan signed a privacy waiver giving them permission to do so.
Alan watched D.C. wither after his expulsion: His girlfriend broke up with him, and some of his friends began to avoid him. D.C. lay awake at night looking through text messages he sent years ago, terrified someone decades later would find something that could ruin his life. “If they are punishing him for creating the image, when does his liability expire?” Alan wondered. “If it’s shared again a year from now, will he be expelled again?”
Alan, a teacher in the school district, coped by voraciously reading court cases and news articles that could shed light on what was happening to his son. He stumbled on a case hundreds of miles north in Pennsylvania, the facts of which were eerily similar to D.C.’s.
In April 2018, two kids, J.S. and his friend, messaged back and forth mocking another student by suggesting he looked like a school shooter. (The court record uses J.S. instead of his full name to protect the student’s anonymity.) J.S. created two memes and sent them to his friend in a private Snapchat conversation. His friend shared the memes publicly on Snapchat, where they were seen by 20 to 40 other students. School administrators permanently expelled J.S., so he and his parents sued the school.
In 2021, after a series of appeals, Pennsylvania’s highest court ruled in J.S.’s favor. While the memes were “mean-spirited, sophomoric, inartful, misguided, and crude,” the state Supreme Court justices wrote in their opinion, they were “plainly not intended to threaten Student One, Student Two, or any other person.”
The justices also shared their sympathy with the challenges schools faced in providing a “safe and quality educational experience” in the modern age. “We recognize that this charge is compounded by technological developments such as social media, which transcend the geographic boundaries of the school. It is a thankless task for which we are all indebted.”
After multiple disciplinary appeals, D.C.’s school upheld the decision to keep him out of school for a year. His parents found a private school that agreed to let him enroll, and he slowly emerged from his depression to continue his straight-A streak there. His charge in court was dismissed in December after he wrote a 500-word essay for the judge on the dangers of social media, according to Alan.
Thinking back on the video months later, D.C. explained that jokes about school violence are common among his classmates. “We try to make fun of it so that it doesn’t seem as serious or like it could really happen,” he said. “It’s just so widespread that we’re all desensitized to it.”
He wonders if letting him back to school would have been more effective in deterring future hoax threats. “I could have gone back to school and said, ‘You know, we can’t make jokes like that because you can get in big trouble for it,’” he said. “I just disappeared for everyone at that school.”
When a school district came across an alarming post on Snapchat in 2023, officials reached out to Safer Schools Together, an organization that helps educators handle school threats. In the post, a pistol flanked by two assault rifles lay on a rumpled white bedsheet. The text overlaid on the photo read, “I’m shooting up central I’m tired of getting picked on everyone is dying tomorrow.”
Steven MacDonald, training manager and development director for Safer Schools Together, recounted this story in a virtual tutorial posted last year on using online tools to trace and manage social media threats. He asked the school officials watching his tutorial what they would do next. “How do we figure out if this is really our student’s bedroom?”
According to MacDonald, it took his organization’s staff only a minute to put the text in quotation marks and run it through Google. A single local news article popped up showing that two kids had been arrested for sharing this exact Snapchat post in Columbia, Tennessee — far from the original district.
“We were able to reach out and respond and say, ‘You know what, this is not targeting your district,’” MacDonald said. Administrators were reassured there was a low likelihood of immediate violence, and they could focus on finding out who was recirculating the old threat and why.
In the training video, MacDonald reviewed skills that, until recently, have been more relevant to police investigators than school principals: How to reverse image search photos of guns to determine whether a post contains a stock image. How to use Snapchat to find contact names for unknown phone numbers. How to analyze the language in the social media posts of a high-risk student.
“We know that why you’re here is because of the increase and the sheer volume of these threats that you may have seen circulated, the non-credible threats that might have even ended up in your districts,” he said. Between last April and this April, Safer Schools Together identified drastic increases in “threat related behavior” and graphic or derogatory social media posts.
Back in the Memphis suburbs, Perkins and other Collierville Schools administrators have attended multiple digital threat assessment training sessions hosted by Safer Schools Together. “I’ve had to learn a lot more apps and social media than I ever thought,” Perkins said.
The knowledge, she said, came in handy during one recent incident in her district. Local police called the district to report that a student had called 911 and reported an Instagram threat targeting a particular school. They sent Perkins a photo of the Instagram profile and username. She began using open source websites to scour the internet for other appearances of the picture and username. She also used a website that allows people to view Instagram stories without alerting the user to gather more information.
With the help of police, Perkins and her team identified that the post was created by someone at the same IP address as the student who had reported the threat. The girl, who was in elementary school, confessed to police that she had done it.
The next day, Perkins and her team interviewed the student, her parents and teachers to understand her motive and goal. “It ended up that there had been some recent viral social media threats going around,” Perkins said. “This individual recognized that it drew in a lot of attention.”
Instead of expelling the girl, school administrators worked with her parents to develop a plan to manage her behavior. They came up with ideas for the girl to receive positive attention while stressing to her family that she had exhibited “extreme behavior” that signaled a need for intensive help. By the end of the day, they had tamped down concerns about immediate violence and created a plan of action.
In many other districts, Perkins said, the girl might have been arrested and expelled for a year without any support — which does not help move students away from the path of violence. “A lot of districts across our state haven’t been trained,” she said. “They’re doing this without guidance.”
Watching the cheerleaders’ TikTok video, it would be easy to miss Allison Bolinger, then the 19-year-old assistant coach. The camera quickly flashes across her standing and smiling in the corner of the room watching the pretend-dead girls.
Bolinger said she and the head coach had been next door planning future rehearsals. Bolinger entered the room soon after the students began filming and “didn’t think anything of it.” Cheerleading practice went forward as usual that afternoon. The next day, she got a call from her dad: The cheerleaders were suspended from school, and Bolinger would have to answer questions from the police.
“I didn’t even know the TikTok was posted. I hadn’t seen it,” she said. “By the time I went to go look for it, it was already taken down.” Bolinger said she ended up losing her job as a result of the incident. She heard whispers around the small community that she was responsible for allowing them to create the video.
Bolinger said she didn’t realize the video was related to school shootings when she was in the room. She often wishes she had asked them at the time to explain the video they were making. “I have beat myself up about that so many times,” she said. “Then again, they’re also children. If they don’t make it here, they’ll probably make it at home.”
Jackson, the grandmother of the 11-year-old in the video, blames Bolinger for not stopping the middle schoolers and faults the police for overreacting. She said all the students, whether or not their families hired a lawyer, got the same punishment in court: three months of probation for a misdemeanor disorderly conduct charge, which could be extended if their grades dropped or they got in trouble again. Each family had to pay more than $100 in court costs, Jackson said, a significant amount for some.
Jackson’s granddaughter successfully completed probation, which also involved writing and submitting a letter of apology to the judge. She was too scared about getting in trouble again to continue on the cheerleading team for the rest of the school year.
Jackson thinks that officials’ outsize response to the video made everything worse. “They shouldn’t even have done nothing until they investigated it, instead of making them out to be terrorists and traumatizing these girls,” she said.
Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him comes the surveillance-first, privacy-last approach that made Ring one of the most maligned devices in tech. Not only is the company reintroducing new versions of old features that would allow police to request footage directly from Ring users, it is also introducing a new feature that would allow police to request live-stream access to people’s home security devices.
This is a bad, bad step for Ring and the broader public.
Ring is rolling back many of the reforms it’s made in the last few years by easing police access to footage from millions of homes in the United States. This is a grave threat to civil liberties in the United States. After all, police have used Ring footage to spy on protestors, and obtained footage without a warrant or consent of the user. It is easy to imagine that law enforcement officials will use their renewed access to Ring information to find people who have had abortions or track down people for immigration enforcement.
Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be “AI first”—whatever that means for a home security camera that lets you see who is ringing your doorbell. We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device.
It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted.
On top of these new features, Ring is also planning to roll back some of the necessary reforms it has made: it is partnering with Axon to build a new tool that would allow police to request Ring footage directly from users, and would also let users consent to police livestreaming directly from their devices.
After years of serving as the eyes and ears of police, the company was compelled by public pressure to make a number of necessary changes. They introduced end-to-end encryption, they ended their formal partnerships with police which were an ethical minefield, and they ended their tool that facilitated police requests for footage directly to customers. Now they are pivoting back to being a tool of mass surveillance.
Why now? It is hard to believe the company is betraying the trust of its millions of customers in the name of “safety” when violent crime in the United States is near historic lows. It’s probably not about their customers—the FTC had to compel Ring to take its users’ privacy seriously.
No, this is most likely about Ring cashing in on the rising tide of techno-authoritarianism, that is, authoritarianism aided by surveillance tech. Too many tech companies want to profit from our shrinking liberties. Google likewise recently ended an old ethical commitment that prohibited it from profiting off of surveillance and warfare. Companies are locking down billion-dollar contracts by selling their products to the defense sector or police.
Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.
Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.
You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here.
Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.
For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system. Now we’ve concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you’re a police chief or an independent researcher, because Axon designed it that way.
Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information—or which can be quickly deleted. Officers are supposed to edit Draft One’s report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they’re done, the officer is prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed the report and made necessary edits to ensure it is consistent with the officer’s recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.
Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI.
One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.
But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used.
So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won’t indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk.
The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon’s first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports.
“We love having new toys until the public gets wind of them,” the administrator wrote.
No Record of Who Wrote What
The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like:
Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible?
How often are officers finding and correcting errors made by the AI, and are there patterns to these errors?
If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer?
Is the AI overstepping in its interpretation of the audio? If a report says, “the subject made a threatening gesture,” was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says “yeah” throughout a conversation as a verbal acknowledgement that they’re listening to what the officer says, is that interpreted as an agreement or a confession?
Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the previous draft, originally created by Draft One, disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer’s own recollection. If an officer generates a Draft One report multiple times, there’s no way to tell whether the AI interprets the audio differently each time.
Axon is open about not maintaining these records, at least when it markets directly to law enforcement.
In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):
“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”
To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because “the last thing” they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit).
Following up on the same question, Axon’s Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn’t be required to save every draft of a police report as they’re re-writing it. This is, of course, misdirection and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One is two processes from two parties–Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer from Axon and the version edited by the officer.
Whatever unexpected consequences word processors once had for police report-writing are long settled, but Draft One is still unproven. After all, every AI evangelist, including Axon, claims this technology is a game-changer. So why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?
It also appears that Draft One isn’t simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department’s Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It’s more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon’s engineers had yet to finalize the feature at the time it was rolled out.
One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If an officer lies or uses inappropriate language in a police report, who is to say whether the officer wrote it or the AI did? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Axon engineers have already discovered a bug that, on at least three occasions, allowed officers to circumvent the “guardrails” that supposedly deter officers from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.
To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it’s used. But Axon has intentionally made this difficult.
What the Audit Trail Actually Looks Like
You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So, we dug through the user manuals to figure out what that exactly means.
The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we’ll get to that in a minute).
This is disappointing because, without this information, it’s near impossible to do even the most basic statistical analysis: how many officers are using the technology and how often.
Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:
A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
A log of an individual officer/user’s basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities this tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings.
This means that, to do a comprehensive review, an evaluator may need to go through the record management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs.
[Image: an example of Draft One usage in an audit log.]
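For illustration only, here is a minimal sketch of what that per-officer review might look like if an agency exported its user audit logs to CSV files. The directory layout, column names, and action wording are assumptions for the example, not Axon’s actual export schema.

```python
# Illustration only: a sketch of combing exported per-user audit logs
# (assumed to be CSV files, one per officer) for Draft One-related actions.
# Directory layout, column names, and action strings are assumptions,
# not Axon's actual export schema.
import csv
import glob

def draft_one_events(log_path):
    """Yield rows from one user's audit log that mention Draft One."""
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if "draft one" in row.get("action", "").lower():
                yield row

# One exported log per officer means the review scales with headcount.
for path in glob.glob("audit_logs/*.csv"):
    for event in draft_one_events(path):
        print(path, event.get("timestamp"), event.get("action"))
```

Even with a script like this, the underlying problem stands: someone still has to pull a separate log for every officer before any of it can be analyzed.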
An auditor could also go report-by-report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.
But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as “I acknowledge this report was generated from a digital recording using Draft One by Axon.” If so, then an administrator can use “Draft One” as a keyword search to find relevant reports.
Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon’s most promoted clients, the Lafayette Police Department in Indiana, told us:
“Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed.”
Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff’s Office, which does require a disclosure at the bottom of each AI-assisted report, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.
They told us: “We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe.”
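For illustration, here is a minimal sketch of the kind of keyword-plus-timeframe search the Palm Beach County Sheriff’s Office described, assuming reports have been exported as plain text files. The filenames, their date format, and the disclosure wording are assumptions for the example.

```python
# Illustration only: a keyword-plus-timeframe search over exported report
# text. Filenames, date format, and disclosure wording are assumptions.
import glob
from datetime import date

DISCLOSURE_KEYWORD = "draft one"
START, END = date(2024, 12, 1), date(2025, 3, 31)

def report_date(path):
    """Assume filenames like reports/2025-01-17_case123.txt."""
    stem = path.replace("\\", "/").split("/")[-1]
    year, month, day = stem.split("_")[0].split("-")
    return date(int(year), int(month), int(day))

matches = []
for path in glob.glob("reports/*.txt"):
    if not START <= report_date(path) <= END:
        continue
    with open(path, encoding="utf-8") as f:
        if DISCLOSURE_KEYWORD in f.read().lower():
            matches.append(path)

print(f"{len(matches)} reports contain the Draft One disclosure")
```

Note that a search like this only works at agencies that put the disclosure in the report text in the first place, which is exactly the gap the Lafayette response illustrates.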
We have requested further clarification from Axon, but they have yet to respond.
However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn’t available to the police department itself.
In response to a request from Politico’s Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports.
An Axon representative responded: “Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy.”
But then, Axon followed up: “We track which reports use Draft One internally so I exported the data.” Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future.
What is Being Done About Draft One
The California Assembly is currently considering SB 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense, first-step bill; if SB 524 becomes law, any law enforcement use of Draft One would be unlawful.
Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork, or at least spend too much time doing it. The current research on whether Draft One remedies this issue shows mixed results, with some agencies reporting no real time savings and others extolling its virtues (although their data also shows that results vary even within those departments).
In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It’s like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards.
Given how untested this technology is and how aggressively the company is rushing Draft One to market, many lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus addressing one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.
At least one prosecutor’s office has already gone further, telling local police that it will not accept AI-assisted report narratives:

We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now… AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.
We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product.
Conclusion
Police should not be using AI to write police reports. There are just too many unanswered questions about how AI would translate the audio of situations and whether police will actually edit those drafts, while simultaneously, there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems or create new ones in an already unfair and untransparent criminal justice system.
EFF will continue to research and advocate against the use of this technology, but for now the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.
This isn’t the first state court to reach this conclusion, but so few courts bother to examine the science-y sounding stuff cops trot out as “evidence” that this decision is worth noting.
There’s no shortage of junk science that has been (and continues to be) treated as actual science during testimony, ranging from the DNA “gold standard” to seriously weird shit like “I can identify a suspect by the creases in his jeans.”
Anyone who’s watched a cop show has seen a detective slide a pen into a shell casing and place it gently in an evidence bag. At some point, a microscope gets involved and the prosecutor (or witness) declares affirmatively that the markings on the casing match the barrel of the murder weapon. Musical stings, ad breaks, and tidy episode wrap-ups ensue.
Maryland’s top court dismantled these delusions back in 2023 by actually bothering to dig into the supposed science behind bullet/cartridge matching. When it gazed behind the curtain, it found AFTE (the Association of Firearm and Tool Mark Examiners) and its methods more than a little questionable.
To sum up (a huge task, considering this was delivered in a 128-page opinion), AFTE’s science was little more than confirmation bias. When trainees were tested, they knew one of the items they examined came from the gun used in the test. When blind testing was utilized, the nearly 80% “success” rate in matches dropped precipitously.
He observed, however, that if inconclusives were counted as errors, the error rate from that study would “balloon[]” to over 30%. In discussing the Ames II Study, he similarly opined that inconclusive responses should be counted as errors. By not doing so, he contended, the researchers had artificially reduced their error rates and allowed test participants to boost their scores. By his calculation, when accounting for inconclusive answers, the overall error rate of the Ames II Study was 53% for bullet comparisons and 44% for cartridge case comparisons—essentially the same as “flipping a coin.”
From “pretty sure” to a coin flip. Not exactly the standard expected from supposed forensic science. And that’s common across most cop forensics. When blind testing is used, error rates soar and stuff that’s supposed to be evidence looks a whole lot more like guesswork.
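To make the disputed arithmetic concrete, here is a minimal sketch with made-up counts (not the actual study data) showing how treating inconclusive calls as errors inflates the reported rate:

```python
# Illustration of the disputed arithmetic with made-up counts (not the actual
# Ames II data): whether inconclusive calls count as errors dramatically
# changes the reported error rate.
correct, incorrect, inconclusive = 600, 60, 340  # hypothetical totals

rate_excluding = incorrect / (correct + incorrect)
rate_including = (incorrect + inconclusive) / (correct + incorrect + inconclusive)

print(f"Inconclusives excluded:  {rate_excluding:.0%}")  # 9%
print(f"Inconclusives as errors: {rate_including:.0%}")  # 40%
```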
The same conclusion is reached here by the Oregon Court of Appeals, which ultimately reverses the lower court’s refusal to suppress this so-called evidence.
This opinion [PDF] only runs 43 pages, but it makes the same points, albeit a bit more concisely. As the lead off to the deep dive makes clear, cartridge matching isn’t science. It’s just a bunch of people looking at stuff and drawing their own conclusions.
As we will explain, in this case, the state did not meet its burden to show that the AFTE method is scientifically valid, that is, that it is capable of measuring what it purports to measure and is able to produce consistent results when replicated. That is so because the method does not actually measure the degree of correspondence between shell cases or bullets; rather, the practitioner’s decision on whether the degree of correspondence indicates a match ultimately depends entirely on subjective, unarticulated standards and criteria arrived at through the training and individualized experience of the practitioner.
For a similar reason, the state did not show that the method is replicable and therefore reliable: The method does not produce consistent results when replicated because it cannot be replicated. Multiple practitioners may analyze the same items and reach the same result, but each practitioner reaches that result based on application of their own subjective and unarticulated standards, not application of the same standards.
That’s a huge problem. Evidentiary standards exist for a reason. No court would allow people to take the stand and speculate wildly about whether or not any evidence exists that substantiates criminal charges. Tossing a lab coat over a bunch of speculation doesn’t suddenly make subjective takes on bullet markings “science.” And continuing to present this guesswork with any level of certainty perverts the course of justice.
[W]hen presented as scientific evidence, AFTE identification evidence—an “identification” purportedly derived from application of forensic science—impairs, rather than helps, the truthfinding process because it presents as scientific a conclusion that, in reality, is a subjective judgment of the examiner based only on the examiner’s training and experience and not on any objective standards or criteria.
In an effort to salvage this evidence, the government claimed the AFTE Journal was self-certifying. In other words, the fact that AFTE published this journal was evidence in and of itself of the existence of scientific rigor. Both the trial court and the appeals court disagreed:
The court rejected the idea that the AFTE Journal, which the government argued shows that the method is subject to peer review, satisfies that factor for two reasons: because the AFTE Journal “is a trade publication, meant only for industry insiders, not the scientific community,” and, more importantly, because “the purpose of publication in the AFTE Journal is not to review the methodology for flaws but to review studies for their adherence to the methodology.”
The ruling quotes many of the same studies cited by the Maryland court in its 2023 decision — the blind studies that made it clear cartridge matching is mostly guesswork. This court arrives at the same conclusion:
[T]he AFTE method, undertaken by a trained examiner, may be effective at identifying matches, but the problem is that, from what was in the record before the court, the analysis is based on training and experience— ultimately, hunches—not science…
To sum up, this method lacks anything that could be considered sound science:
Neither the AFTE theory nor the AFTE method prescribes or quantifies what the examiner is looking for; the examiner is looking for sufficient agreement, which is defined only by their own personal identification criteria.
Having arrived at this conclusion, the court does what it has to do: It reverses the lower court’s denial of the defendant’s suppression motion. The “error” of putting this “evidence” on the record was far from harmless. The state has already announced it plans to appeal this decision, but for now, investigators hoping shell markings will help them close some cases might want to dig a little deeper in the evidence locker.
Illinois legislators on Wednesday passed a bill to explicitly prevent police from ticketing and fining students for minor misbehavior at school, moving to end a practice that has harmed students across the state.
The new law would apply to all public schools, including charters. It will require school districts, beginning in the 2027-28 school year, to report to the state how often they involve police in student matters each year and to separate the data by race, gender and disability. The state will be required to make the data public.
The legislation comes three years after a ProPublica and Chicago Tribune investigation, “The Price Kids Pay,” revealed that even though Illinois law bans school officials from fining students directly, districts skirted the law by calling on police to issue citations for violating local ordinances.
“The Price Kids Pay” found that thousands of Illinois students had been ticketed in recent years for adolescent behavior once handled by the principal’s office — things like littering, making loud noises, swearing, fighting or vaping in the bathroom. It also found that Black students were twice as likely to be ticketed at school as their white peers.
From the House floor, Rep. La Shawn Ford, a Democrat from Chicago, thanked the news organizations for exposing the practice and told legislators that the goal of the bill “is to make sure if there is a violation of school code, the school should use their discipline policies” rather than disciplining students through police-issued tickets.
State Sen. Karina Villa, a Democrat from suburban West Chicago and a sponsor of the measure, said in a statement that ticketing students failed to address the reasons for misbehavior. “This bill will once and for all prohibit monetary fines as a form of discipline for Illinois students,” she said.
The legislation also would prevent police from issuing tickets to students for behavior on school transportation or during school-related events or activities.
The Illinois Association of Chiefs of Police opposed the legislation. The group said in a statement that while school-based officers should not be responsible for disciplining students, they should have the option to issue citations for criminal conduct as one of a “variety of resolutions.” The group said it’s concerned that not having the option to issue tickets could lead to students facing arrest and criminal charges instead.
The legislation passed the House 69-44. It passed in the Senate last month 37-17 and now heads to Gov. JB Pritzker, who previously has spoken out against ticketing students at school. A spokesperson said Wednesday night that he “was supportive of this initiative” and plans to review the bill.
The legislation makes clear that police can arrest students for crimes or violence they commit, but that they cannot ticket students for violating local ordinances prohibiting a range of minor infractions.
That distinction was not clear in previous versions of the legislation, which led to concern that schools would not be able to involve police in serious matters — and was a key reason legislation on ticketing foundered in previous legislative sessions. Students also may still be ordered to pay for lost, stolen or damaged property.
“This bill helps create an environment where students can learn from their mistakes without being unnecessarily funneled into the justice system,” said Aimee Galvin, government affairs director with Stand for Children, one of the groups that advocated for banning municipal tickets as school-based discipline.
The news investigation detailed how students were doubly penalized: when they were punished in school, with detention or a suspension, and then when they were ticketed by police for minor misbehavior. The investigation also revealed how, to resolve the tickets, children were thrown into a legal process designed for adults. Illinois law permits fines of up to $750 for municipal ordinance violations; it’s difficult to fight the charges, and students and families can be sent to collections if they don’t pay.
After the investigation was published, some school districts stopped asking police to ticket students. But the practice has continued in many other districts.
The legislation also adds regulations for districts that hire school-based police officers, known as school resource officers. Starting next year, districts with school resource officers must enter into agreements with local police to lay out the roles and responsibilities of officers on campus. The agreements will need to specify that officers are prohibited from issuing citations on school property and that they must be trained in working with students with disabilities. The agreements also must outline a process for data collection and reporting. School personnel also would be prohibited from referring truant students to police to be ticketed as punishment.
Before the new legislation, there had been some piecemeal changes and efforts at reform. A state attorney general investigation into a large suburban Chicago district confirmed that school administrators were exploiting a loophole in state law when they asked police to issue tickets to students. The district denied wrongdoing, but that investigation found the district broke the law and that the practice disproportionately affected Black and Latino students. The state’s top legal authority declared the practice illegal and said it should stop.