AI can be useful. But so many people seem to feel it’s nothing more than an unpaid intern you can lean on to do all the work you don’t feel like doing yourself. (And the less said about its misuse to generate a webful of slop, the better.)
Like everyone everywhere, police departments are starting to rely on AI to do some of the menial work cops don’t like doing themselves. And it’s definitely going poorly. More than a year ago, it was already apparent that law enforcement agencies were just pressing the “easy” button, rather than utilizing it wisely to work smarter and faster.
Axon — the manufacturer of the Taser and a line of now-ubiquitous body cameras — has pushed hard for AI adoption. Even it knows AI use can swiftly become problematic if it’s not properly backstopped by humans. But the humans it sells its products to don’t seem to care about anything other than its ability to churn out paperwork with as little human involvement as possible.
The report notes that Draft One includes a feature that can intentionally insert silly sentences into AI-produced drafts as a test to ensure officers are thoroughly reviewing and revising the drafts. However, Axon’s CEO mentioned in a video about Draft One that most agencies are choosing not to enable this feature.
Yep. They just don’t care. If it means cases get tossed because sworn statements have been AI auto-penned, so be it. If someone ends up falsely accused of a crime or falsely arrested because of something AI whipped up, that’s just the way it goes. And if it adds a layer of plausible deniability between an officer and their illegal actions, even better.
Not only is the tech apparently not saving anyone much time, it’s also being abused by law enforcement officers to justify their actions after the fact. But it’s shiny and new and seems sleek and futuristic, so of course reporters will occasionally decide to do law enforcement’s PR work for it by presenting incredibly fallible tech as the 8th wonder of the police world.
Sometimes reporters bury the lede. And sometimes their editors decide the lede should be buried by the end of the headline. That appears to be the case here, where Mya Constantino’s reporting isn’t exactly what’s being touted in this article’s original headline.
As the URL makes clear, the current headline (updated January 1st) wasn’t the original headline. The Wayback Machine tells the real story. This article was originally published on December 19, 2025 with this headline:
That headline (which read “How Utah police departments are using AI to keep streets safer”) was immediately followed by these paragraphs:
HEBER CITY, Utah — An artificial intelligence that writes police reports had some explaining to do earlier this month after it claimed a Heber City officer had shape-shifted into a frog.
However, the truth behind that so-called magical transformation is simple.
“The body cam software and the AI report writing software picked up on the movie that was playing in the background, which happened to be ‘The Princess and the Frog,'” Sgt. Keel told FOX 13 News. “That’s when we learned the importance of correcting these AI-generated reports.”
Fortunately, those paragraphs remain in the updated post, which now carries a headline that makes a lot more sense:
The headline (accompanied by a short video of a tree frog) says:
Ribbit ribbit! Artificial Intelligence programs used by Heber City police claim officer turned into a frog
While I can understand why a small news outlet (albeit one that’s a Fox affiliate) might decide to play nice with the local cops rather than call out their software failure in the headline, that doesn’t make it acceptable. My guess is the original headline was about maintaining access to officers and officials. At some point, someone realized the stuff detailed in the first paragraphs would probably attract more attention than some dry recitation of cop AI talking points.
But even the belated headline change doesn’t really make anything better here. There’s not really anything in the article that demonstrates how AI is making anyone safer. The article also notes that two different AI programs are currently being tested: Code Four, developed by a couple of 19-year-old former MIT students, and Draft One, which is part of Axon’s vertical integration strategy. Draft One was the product that turned a cop into a frog, which probably explains why the reporter’s ridealong (so to speak…) only involved use of Code Four’s AI.
The reporter was on hand for a faux traffic stop that was later summarized by the AI to (apparently) demonstrate its usefulness. The journalist points out that the AI-generated report needed corrections, but at least didn’t turn any of the participants into a Disney-inspired character.
That being said, there’s nothing here that indicates these products will make streets “safer.” Here is the entirety of what was said about the tech’s positives by Sgt. Rick Keel of the Heber City PD:
Keel says one of the major draws is that the software saves them time, as writing reports typically takes 1-2 hours.
“I’m saving myself about 6-8 hours weekly now,” Keel said. “I’m not the most tech-savvy person, so it’s very user-friendly.”
Giving cops more free time doesn’t make streets safer. It just means they have more time on their hands. That’s not always a good thing. Of all the things that need to be fixed in terms of US policing, writing reports is pretty far down the list. It’s what’s being done with this extra time that actually matters. Pursuing efficiency for its own sake makes no sense in the context of law enforcement. The statements by this PD official raise questions that were never asked by the reporter, like the most important one: what is being done with this saved time? And if something still requires a lot of human activity to keep it from generating nonsense, is it really any better than the system it’s replacing?
One thing is for sure: AI doing the menial work of filing police reports is never going to make anyone safer. On the contrary, it’s only going to increase the chance that someone’s rights will be violated. And because law enforcement agencies refuse to be honest about the risks this poses and the fact that it appears only officers who don’t like writing paperwork will benefit from this added expense, they shouldn’t be trusted with tech that will ultimately only make the bad parts of US policing even worse.
Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.
Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.
You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here.
Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.
For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system. Now we’ve concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you’re a police chief or an independent researcher, because Axon designed it that way.
Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information—or which can be quickly deleted. Officers are supposed to edit Draft One’s report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they’re done, the officer is prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed the report and made necessary edits to ensure it is consistent with the officer’s recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.
Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI.
One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.
But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used.
So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won’t indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk.
The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon’s first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports.
“We love having new toys until the public gets wind of them,” the administrator wrote.
No Record of Who Wrote What
The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like:
Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible?
How often are officers finding and correcting errors made by the AI, and are there patterns to these errors?
If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer?
Is the AI overstepping in its interpretation of the audio? If a report says, “the subject made a threatening gesture,” was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says “yeah” throughout a conversation as a verbal acknowledgement that they’re listening to what the officer says, is that interpreted as an agreement or a confession?
Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the previous draft, originally created by Draft One, disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer’s own recollection. If an officer generates a Draft One report multiple times, there’s no way to tell whether the AI interprets the audio differently each time.
Axon is open about not maintaining these records, at least when it markets directly to law enforcement.
In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):
“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”
To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because “the last thing” they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit).
Following up on the same question, Axon’s Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn’t be required to save every draft of a police report as they’re re-writing it. This is, of course, misdirection and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One is two processes from two parties: Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer from Axon and the version edited by the officer.
Whatever unexpected consequences police departments writing reports in word processors may once have produced, those days are over; Draft One, by contrast, is still unproven. After all, every AI-evangelist, including Axon, claims this technology is a game-changer. So, why wouldn’t an agency want to maintain a record that can establish the technology’s accuracy?
It also appears that Draft One isn’t simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department’s Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It’s more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon’s engineers had yet to finalize the feature at the time it was rolled out.
One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If an officer lies or uses inappropriate language in a police report, who is to say whether the officer wrote it or the AI? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Engineers have already discovered a bug that allowed officers, on at least three occasions, to circumvent the “guardrails” that supposedly deter officers from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.
To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it’s used. But Axon has intentionally made this difficult.
What the Audit Trail Actually Looks Like
You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So, we dug through the user manuals to figure out what exactly that means.
The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we’ll get to that in a minute).
This is disappointing because, without this information, it’s near impossible to do even the most basic statistical analysis: how many officers are using the technology and how often.
Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:
A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
A log of an individual officer/user’s basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities this tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings.
This means that, to do a comprehensive review, an evaluator may need to go through the record management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs.
An example of Draft One usage in an audit log.
An auditor could also go report by report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.
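To make the scale of that task concrete, here is a minimal sketch of the kind of script an auditor might end up writing just to get a raw tally, assuming the agency has already exported every officer’s audit log to CSV files. The field names and action labels below are hypothetical; Axon’s export format isn’t publicly documented, so this only illustrates the manual, per-officer combing process described above, not Axon’s actual system.

```python
# Hypothetical sketch: tallying Draft One-related entries across exported per-user audit logs.
# The CSV layout and the "user"/"action" column names are assumptions, not Axon's documented format.

import csv
import glob
from collections import Counter

# Hypothetical action labels an audit log might use for Draft One activity.
DRAFT_ONE_ACTIONS = ("Draft One request", "Draft One disclosure signed")

def tally_draft_one_usage(log_dir: str) -> Counter:
    """Count Draft One-related audit entries per officer across all exported CSV logs."""
    usage = Counter()
    for path in glob.glob(f"{log_dir}/*.csv"):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                action = row.get("action", "")
                if any(label in action for label in DRAFT_ONE_ACTIONS):
                    usage[row.get("user", "unknown")] += 1
    return usage

if __name__ == "__main__":
    for officer, count in tally_draft_one_usage("exported_audit_logs").most_common():
        print(f"{officer}: {count} Draft One-related entries")
```

Even with something like this, the auditor still has to pull each officer’s log by hand first, because there is no bulk export of Draft One activity.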
But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as “I acknowledge this report was generated from a digital recording using Draft One by Axon.” If so, then an administrator can use “Draft One” as a keyword search to find relevant reports.
Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon’s most promoted clients, the Lafayette Police Department in Indiana, told us:
“Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed.”
Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff’s Office, which does require a disclosure at the bottom of each report that it had been written by AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.
They told us: “We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe.”
We have requested further clarification from Axon, but they have yet to respond.
However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn’t available to the police department itself.
In response to a request from Politico’s Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports.
An Axon representative responded: “Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy.”
But then, Axon followed up: “We track which reports use Draft One internally so I exported the data.” Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future.
What is Being Done About Draft One
The California Assembly is currently considering SB 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense and first-step bill, and any law enforcement usage would be unlawful.
Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork. Or, at least, they spend too much time doing paperwork. The current research on whether Draft One remedies this issue is mixed, with some agencies reporting no real time savings and others extolling its virtues (although their data also shows that results vary even within a department).
In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It’s like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards.
Given how untested this technology is and what a hurry the company is in to sell Draft One, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus sidestepping one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.
We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now… AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.
We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product.
Conclusion
Police should not be using AI to write police reports. There are just too many unanswered questions about how AI would translate the audio of situations and whether police will actually edit those drafts, while simultaneously, there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems or create new ones in an already unfair and untransparent criminal justice system.
EFF will continue to research and advocate against the use of this technology but for now, the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.
When your local police department buys one piece of surveillance equipment, you can easily expect that the company that sold it will try to upsell them on additional tools and upgrades.
At the end of the day, public safety vendors are tech companies, and their representatives are salespeople using all the tricks from the marketing playbook. But these companies aren’t just after public money—they also want data.
And each new bit of data that police collect contributes to a pool of information to which the company can attach other services: storage, data processing, cross-referencing tools, inter-agency networking, and AI analysis. The companies may even want the data to train their own AI model. The landscape of the police tech industry is changing, and companies that once specialized in a single technology (such as hardware products like automated license plate readers (ALPRs) or gunshot detection sensors) have developed new capabilities or bought up other tech companies and law enforcement data brokers—all in service of becoming the corporate giant that serves as a one-stop shop for police surveillance needs.
One of the most alarming trends in policing is that companies are regularly pushing police to buy more than they need. Vendors regularly pressure police departments to lock in the price now for a whole bundle of features and tools in the name of “cost savings,” often claiming that the cost à la carte for any of these tools will be higher than the cost of a package, which they warn will also be priced more expensively in the future. Market analysts have touted the benefits of creating “moats” between these surveillance ecosystems and any possible competitors. By making it harder to switch service providers due to integrated features, these companies can lock their cop customers into multi-year subscriptions and long-term dependence.
Think your local police are just getting body-worn cameras (BWCs) to help with public trust or ALPRs to aid their hunt for stolen vehicles? Don’t assume that’s the end of it. If there’s already a relationship between a company and a department, that department is much more likely to get access to a free trial of whatever other device or software that company hopes the department will put on its shopping list.
These vendors also regularly help police departments apply for grants and waivers, and provide other assistance to find funding, so that as soon as there’s money available for a public safety initiative, those funds can make their way directly to their business.
Companies like Axon have been particularly successful at using their relationships and leveraging the ability to combine equipment into receiving “sole source” designations. Typically, government agencies must conduct a bidding process when buying a new product, be it toilet paper, computers, or vehicles. For a company to be designated a sole-source provider, it is supposed to provide a product that no other vendor can provide. If a company can get this designation, it can essentially eliminate any possible competition for particular government contracts. When Axon is under consideration as a vendor for equipment like BWCs, for which there are multiple possible other providers, it’s not uncommon to see a police department arguing for a sole-source procurement for Axon BWCs based on the company’s ability to directly connect their cameras to the Fusus system, another Axon product.
Here are a few of the big players positioning themselves to collect your movements, analyze your actions, and make you—the taxpayer—bear the cost for the whole bundle of privacy invasions.
Axon Enterprise’s ‘Suite’
Axon expects to have yet another year of $2 billion-plus in revenue in 2025. The company first got its hooks into police departments through the Taser, the electric stun gun. Axon then plunged into the BWC market amidst Obama-era outrage at police brutality and the flood of grant money flowing from the federal government to local police departments for BWCs, which were widely promoted as a police accountability tool. Axon parlayed its relationships with hundreds of police departments and capture and storage of growing terabytes of police footage into a menu of new technological offerings.
In its annual year-end securities filing, Axon told investors it was “building the public safety operating system of the future” through its suite of “cloud-hosted digital evidence management solutions, productivity and real-time operations software, body cameras, in-car cameras, TASER energy devices, robotic security and training solutions” to cater to agencies in the federal, corrections, justice, and security sectors.
Axon controls an estimated 85 percent of the police body-worn camera market. Its Evidence.com platform, once a trial add-on for BWC customers, is now also one of the biggest records management systems used by police. Its other tools and services include record management, video storage in the cloud, drones, connected private cameras, analysis tools, virtual reality training, and real-time crime centers.
An image from the Quarter 4 2024 slide deck for investors, which describes different levels of the “Officer Safety Plan” (OSP) product package and highlights how 95% of Axon customers are tied to a subscription plan.
Axon has been adding AI to its repertoire, and it now features a whole “AI Era” bundle plan. One recent offering is Draft One, which connects to Axon’s body-worn cameras (BWCs) and uses AI to generate police reports based on the audio captured in the BWC footage. While use of the tool may start off as a free trial, Axon sees Draft One as another key product for capturing new customers, despite widespread skepticism of the accuracy of the reports, the inability to determine which reports have been drafted using the system, and the liability they could bring to prosecutions.
In 2024, Axon acquired a company called Fusus, a platform that combines the growing stores of data that police departments collect—notifications from gunshot detection and automated license plate reader (ALPR) systems; footage from BWCs, drones, public cameras, and sometimes private cameras; and dispatch information—to create “real-time crime centers.” The company now claims that Fusus is being used by more than 250 different policing agencies.
Fusus claims to bring the power of the real-time crime center to police departments of all sizes, which includes the ability to help police access and use live footage from both public and private cameras through an add-on service that requires a recurring subscription. It also claims to integrate nicely with surveillance tools from other providers. Recently, it has been cutting ties, most notably with Flock Safety, as it starts to envelop some of the options its frenemies had offered.
In the middle of April, Axon announced that it would begin offering fixed ALPR, a key feature of the Flock Safety catalogue, and an AI Assistant, which has been a core offering of Truleo, another Axon competitor.
Flock Safety’s Bundles and FlockOS
Flock Safety is another major police technology company that has expanded its focus from one primary technology to a whole package of equipment and software services.
Flock Safety started with ALPRs. These tools use a camera to read vehicle license plates, collecting the make, model, location, and other details which can be used for what Flock calls “Vehicle Fingerprinting.” The details are stored in a database that sometimes finds a match among a “hot list” provided by police officers, but otherwise just stores and shares data on how, where, and when everyone is driving and parking their vehicles.
Much of what Flock Safety does now comes together in their FlockOS system, which claims to bring together various surveillance feeds and facilitate real-time “situational awareness.”
Motorola Solutions’ ‘Ecosystem’
When you think of Motorola, you may think of phones—but there’s a good chance that you missed the moment in 2011 when the phone side of the company, Motorola Mobility, split off from Motorola Solutions, which is now a big player in police surveillance.
On its website, Motorola Solutions claims that departments are better off using a whole list of equipment from the same ecosystem, boasting the tagline, “Technology that’s exponentially more powerful, together.” Motorola describes this as an “ecosystem of safety and security technologies” in its securities filings. In 2024, the company also reported $2 billion in sales, but unlike Axon, its customer base is not exclusively law enforcement and includes private entities like sports stadiums, schools, and hospitals.
Motorola’s technology includes 911 services, radio, BWCs, in-car cameras, ALPRs, drones, face recognition, crime mapping, and software that supposedly unifies it all. Notably, video can also come with artificial intelligence analysis, in some cases allowing law enforcement to search video and track individuals across cameras.
In January 2019, Motorola Solutions acquired Vigilant Solutions, one of the big players in the ALPR market, as part of its takeover of Vaas International Holdings. Now the company (under the subsidiary DRN Data) claims to have billions of scans saved from police departments and private ALPR cameras around the country. Marketing language for its Vehicle Manager system highlights that “data is overwhelming,” because the amount of data being collected is “a lot.” It’s a similar claim made by other companies: Now that you’ve bought so many surveillance tools to collect so much data, you’re finding that it is too much data, so you now need more surveillance tools to organize and make sense of it.
SoundThinking’s ‘SafetySmart Platform’
SoundThinking began as ShotSpotter, a so-called gunshot detection tool that uses microphones placed around a city to identify and locate sounds of gunshots. As news reports of the tool’s inaccuracy and criticisms have grown, the company has rebranded as SoundThinking, adding to its offerings ALPRs, case management, and weapons detection. The company is now marketing its SafetySmart platform, which claims to integrate different stores of data and apply AI analytics.
In 2024, SoundThinking laid out its whole scheme in its annual report, referring to it as the “cross-sell” component of their sales strategy.
The “cross-sell” component of our strategy is designed to leverage our established relationships and understanding of the customer environs by introducing other capabilities on the SafetySmart platform that can solve other customer challenges. We are in the early stages of the upsell/cross-sell strategy, but it is promising – particularly around bundled sales such as ShotSpotter + ResourceRouter and CaseBuilder + CrimeTracer. Newport News, VA, Rocky Mount, NC, Reno, NV and others have embraced this strategy and recognized the value of utilizing multiple SafetySmart products to manage the entire life cycle of gun crime…. We will seek to drive more of this sales activity as it not only enhances our system’s effectiveness but also deepens our penetration within existing customer relationships and is a proof point that our solutions are essential for creating comprehensive public safety outcomes. Importantly, this strategy also increases the average revenue per customer and makes our customer relationships even stickier.
Many of SoundThinking’s new tools rely on a push toward “data integration” and artificial intelligence. ALPRs can be integrated with ShotSpotter. ShotSpotter can be integrated with the CaseBuilder records management system, and CaseBuilder can be integrated with CrimeTracer. CrimeTracer, once known as COPLINK X, is a platform that SoundThinking describes as a “powerful law enforcement search engine and information platform that enables law enforcement to search data from agencies across the U.S.” EFF tracks this type of tool in the Atlas of Surveillance as a third-party investigative platform: software tools that combine open-source intelligence data, police records, and other data sources, including even those found on the dark web, to generate leads or conduct analyses.
SoundThinking, like a lot of surveillance tech, can be costly for departments, but the company seems to see the value in fostering its existing police department relationships even if they’re not getting paid right now. In Baton Rouge, budget cuts recently resulted in the elimination of the $400,000 annual contract for ShotSpotter, but the city continues to use it.
“They have agreed to continue that service without accepting any money from us for now, while we look for possible other funding sources. It was a decision that it’s extremely expensive and kind of cost-prohibitive to move the sensors to other parts of the city,” Baton Rouge Police Department Chief Thomas Morse told a local news outlet, WBRZ.
Beware the Bundle
Government surveillance is big business. The companies that provide surveillance and police data tools know that it’s lucrative to cultivate police departments as loyal customers. They’re jockeying for monopolization of the state surveillance market that they’re helping to build. While they may be marketing public safety in their pitches for products, from ALPRs to records management to investigatory analysis to AI everything, these companies are mostly beholden to their shareholders and bottom lines.
The next time you come across BWCs or another piece of tech on your city council’s agenda or police department’s budget, take a closer look to see what other strings and surveillance tools might be attached. You are not just looking at one line item on the sheet—it’s probably an ongoing subscription to a whole package of equipment designed to challenge your privacy, and no sort of discount makes that a price worth paying.
The Anchorage Police Department (APD) has concluded its three-month trial of Axon’s Draft One, an AI system that uses audio from body-worn cameras to write narrative police reports for officers—and has decided not to retain the technology. Axon touts this technology as “force multiplying,” claiming it cuts in half the amount of time officers usually spend writing reports—but APD disagrees.
The APD deputy chief told Alaska Public Media, “We were hoping that it would be providing significant time savings for our officers, but we did not find that to be the case.” The deputy chief flagged that the time it took officers to review reports cut into the time savings from generating the report. The software translates the audio into narrative, and officers are expected to read through the report carefully to edit it, add details, and verify it for authenticity. Moreover, because the technology relies on audio from body-worn cameras, it often misses visual components of the story that the officer then has to add themselves. “So if they saw something but didn’t say it, of course, the body cam isn’t going to know that,” the deputy chief continued.
The Anchorage Police Department is not alone in claiming that Draft One is not a time saving device for officers. A new study into police using AI to write police reports, which specifically tested Axon’s Draft One, found that AI-assisted report-writing offered no real time-savings advantage.
This news comes on the heels of policymakers and prosecutors casting doubt on the utility or accuracy of AI-created police reports. In Utah, a pending state bill seeks to make it mandatory for departments to disclose when reports have been written by AI. In King County, Washington, the Prosecuting Attorney’s Office has directed officers not to use any AI tools to write narrative reports.
In an era where companies that sell technology to police departments profit handsomely and have marketing teams to match, it can seem like there is an endless stream of press releases and local news stories about police acquiring some new and supposedly revolutionary piece of tech. But what we don’t usually get to see is how many times departments decide that technology is costly, flawed, or lacks utility. As the future of AI-generated police reports rightly remains hotly contested, it’s important to pierce the veil of corporate propaganda and see when and if police departments actually find these costly bits of tech useless or impractical.
It often seems that when people have no good ideas or, indeed, any ideas at all, the next thing out of their mouths is “maybe some AI?” It’s not that AI can’t be useful. It’s that so many use cases are less than ideal.
Enter Axon, formerly Taser, which has moved from selling modified cattle prods to cops to selling them body cameras. The shift makes sense. Policy makers want to believe body cameras will create more accountability in police forces that have long resisted this. Cops don’t mind this push because it’s far more likely body cam footage will deliver criminal convictions than it will force them to behave better when wielding the force of law.
Axon wants to keep cops hooked on body cams. It hands them out like desktop printers: cheap entry costs paired with far more expensive, long-term contractual obligations. Buy a body cam from Axon on the cheap and expect to pay fees for access and storage for years to come. Now, there’s another bit of digital witchery on top of the printer cartridge-esque access fees: AI assistance for police reports.
Theoretically, it’s a win. Cops will spend less time bogged down in paperwork and more time patrolling the streets. In reality, it’s something else entirely: the abdication of responsibility to algorithms and a little more space separating cops from accountability.
AI can’t be relied on to recap news items coherently. It’s already shown it’s capable of “hallucinating” narratives due to the data it relies on or has been trained on. There’s no reason to believe that, at this point, AI is capable of performing tasks cops have been doing for years: writing up arrest/interaction reports.
The problem here is that a bogus AI-generated report causes far more real-world pain than that experienced by news agencies that endure momentary public shaming or lawyers being chastised by judges. People can lose their rights and their actual freedom if AI concocts a narrative that supports the actions taken by officers. Even at its best, AI should not be allowed to determine whether or not people have access to their rights or literal freedom.
“Police reports play a crucial role in our justice system,” ACLU speech, privacy and technology senior policy analyst and report author Jay Stanley wrote. “Concerns include the unreliability and biased nature of AI, evidentiary and memory issues when officers resort to this technology, and issues around transparency.
“In the end, we do not think police departments should use this technology,” Stanley concluded.
There’s more in this article from The Register than just some summarizing of the ACLU’s comprehensive report [PDF]. It also features input from people who’ve actually done this sort of work on the ground level who align themselves with the ACLU’s criticism, rather than the government agencies they worked for. This is from Brandon Vigliarolo, who wrote this op-ed for El Reg:
In my time as a Military Policeman in the US Army, I spent plenty of time on shifts writing boring, formulaic, and necessarily granular reports on incidents, and it was easily the worst part of my job. I can definitely sympathize with police in the civilian world, who deal with far worse – and more frequent – crimes than I had to address on small bases in South Korea.
That said, I’ve also had a chance to play with modern AI and report on many of its shortcomings, and the ACLU seems to definitely be on to something in Stanley’s report. After all, if we can’t even trust AI to write something as legally low-stakes as news or a bug report, how can we trust it to do decent police work?
The answer is we can’t. We can’t do it now. And there’s a solid chance we can’t do it ever.
Both Axon and law enforcement agencies choosing to utilize this tech will claim human backstops will prevent AI from hallucinating someone into jail or manufacturing justification for civil rights violations. But that’s obviously not true. And that’s been confirmed by Axon itself, whose future business relies on future uptake of its latest tech offering.
In an ideal world, Stanley added, police would be carefully reviewing AI-generated drafts, but that very well may not be the case. The report notes that Draft One includes a feature that can intentionally insert silly sentences into AI-produced drafts as a test to ensure officers are thoroughly reviewing and revising the drafts. However, Axon’s CEO mentioned in a video about Draft One that most agencies are choosing not to enable this feature.
This leading indicator suggests cop shops are looking for a cheap way to relieve the paperwork burden on officers, presumably to free them up to do the more important work of law enforcement. The lower cost/burden seems to be the only focus, though. Even when given something as simple as a single-click option to ensure better human backstopping of AI-generated police reports, agencies are opting out because, apparently, it might mean some reports will be rejected and/or the thin veil of plausible deniability might be pierced.
That’s part of the bargain. If a robot writes a report, officers can plausibly claim discrepancies between reports and recordings aren’t their fault. But that’s not even the only problem. As the ACLU report notes, there’s a chance AI-generated reports will decide something “seen” or “heard” in recordings supports officers’ actions, even if human review of the same footage would see clear rights violations.
The other problem is inadvertent confirmation bias. In an ideal world, any arrest or interaction that has resulted in questionable force deployment — especially when cops kill someone — cops would need to give statements before they’ve had a chance to review recordings. This would help eliminate post facto narratives that remove contradictory statements and allow officers to agree upon an exonerative narrative. Allowing AI to craft reports from uploaded footage undercuts this necessary time-and-distance factor, giving cops’ cameras the chance to tell the story before the cops have even come up with their own.
Now, it might seem that would be better. But I can guarantee you that if the AI report doesn’t agree with the officer’s report in disputed situations, the AI-generated report will be kicked to the curb. And it works the other way, too.
Even the early adopters of body cams found a way to make this so-called “accountability” tech work for them. When the cameras weren’t being turned on or off to suit narrative needs, cops were attacking compliant arrestees while yelling things like “stop resisting” or claiming the suspect was trying to grab one of their weapons. The subjective angle, coupled with extremely subjective statements in the recordings, was leveraged to provide justification for any level of force deployed. AI is incapable of separating cop pantomime from what’s captured on tape, which means all cops have to do to talk a bot into backing their play is say a bunch of stuff that sounds like probable cause while recording an arrest or search.
We already know most law enforcement agencies rarely proactively review body cam footage. And they’re even less likely to review reports and question officers if things look a bit off. Most agencies don’t have the personnel to handle proactive reviews, even if they have the desire to engage in better oversight. And an even larger percentage lack the desire to police their police officers, which means there will never be enough people in place to check the work (and paperwork) of law enforcers.
Adding AI won’t change this equation. It will just make direct oversight that much simpler to abandon. Cops won’t be held accountable because they can always blame discrepancies on the algorithm. And the tech will encourage more rights violations because it adds another layer of deniability officers and their supervisors can deploy when making statements in state courts, federal courts, or the least-effective court of all, the court of public opinion.
These are all reasons accountability-focused legislators, activists, and citizens should oppose a shift to AI-enhanced police reports. And they’re the same reasons that will encourage rapid adoption of this tech by any law enforcement agency that can afford it.
In the only country in the world where this sort of violence happens frequently enough it’s become a despairing meme, legislators continue to ignore the obvious solutions in favor of throwing money at esoteric options that won’t stop Americans from entering schools to murder children en masse.
In Texas, this problem hits home harder. Not only does the state do everything it can to encourage gun ownership, it is also home to one of the more devastating school shootings in recent history — one in which Uvalde, Texas police officers rushed to the scene of a school shooting only to spend nearly 90 minutes doing nothing about it.
The simple answer would be stricter gun control laws. But no Texan legislator is willing to do that, not if they expect to be re-elected. And there are plenty of people who claim the Second Amendment is the best amendment, because arming citizens means the government will be too scared to engage in overreach lest it get [checks notes] shot the fuck up.
In reality, most Second Amendment enthusiasts aren’t arming themselves to prevent the government from being overtaken by authoritarians. After all, they voted for Trump at least twice, and he’s the kind of autocrat their window decals have warned against. Most exercises of the Second Amendment are purely performative — “rolling coal” but it’s dudes in camo walking through Walmarts strapped with AR-15s.
And, given recent election results, perhaps now is not the time to start asking questions about the Second Amendment, not when the inbound president has said stuff that might make us genuinely concerned about our Third Amendment rights.
Texas public schools could be guarded by drones armed with pepper spray or Tasers under a new bill filed in the Texas Legislature meant to beef up school security.
The measure would boost funding for safety upgrades and let schools deploy drones in place of the armed guards that lawmakers required on every campus in response to the Uvalde school shooting. Districts have said they don’t have the money to make those hires, and Hearst Newspaper previously found many haven’t complied or have instead armed teachers.
Normal people will obviously ask questions about this proposal. Non-normal people are the ones who won’t ask questions, because it doesn’t threaten their “right” to own guns. But it is completely asinine for multiple reasons.
First of all, pepper spray is not something that can be safely deployed from a distance. It’s aerosolized, which means anyone in the area can be negatively affected, even if the intent is to disable an armed suspect. It has to be deployed up close and directly at the eyes/nasal passages of the target. Unless the drones are going to be zooming down to eye level with incoming school shooters, this method has as much potential to harm innocent students and teachers as it has to incapacitate a threat.
Tasers are no better. Closer is better and even Taser maker Axon — or at least its board of ethics — has already objected to mounting Tasers on drones. Of course, none of that objection really matters. The entire Axon ethics board resigned following the company’s announcement it would pursue this option, only to see the company acquire a drone manufacturer that does frequent business with the US Department of Defense.
But even if the tech aligns, it’s still a bad idea. Tasers are not precision weapons. They do their best at close range and, even then, they’re not guaranteed to incapacitate. Firing Taser darts at a moving target from a moving object isn’t a recipe for success.
All in all, it’s about as stupid as thinking the solution is arming teachers. Teachers should not be expected to do the work of trained law enforcement. And teachers should never be expected to consider trading fire with school shooters a part of their job description. Again, the problem is the easy access to weapons, not the lack of defensive options. If anything, dumbass legislators should be advocating for arming students. After all, there are far more students than teachers and surely the presence of 30 or so “good guys with guns” in any classroom would be a significant deterrent to school shootings.
Despite the absolute lack of anything indicating arming drones this way will result in fewer school shootings, government contractors are encouraged by the willingness of legislators to throw money at bad ideas. Mithril Defense has already posted a job opening for an aspiring salesperson who’s apparently going to get paid only a commission for talking more legislators into buying more drones, Tasers, and drone-mounted pepper sprayers.
This CV (of sorts) is inadvertently hilarious:
Our team includes a former Navy SEAL team SIX Command Master Chief, a serial tech entrepreneur, the #1 American drone pilot on ESPN, and various technical teams.
If there’s anything worse for anything than a “serial tech entrepreneur,” I’ve never met it. And despite the presence of SEAL Team Six and ESPN in the write-up, I’m far more interested in the makeup of the “various technical teams.” Keep in mind, this is an hourly position with “upside based on success.” To those interested, I would assume this means minimum wage and an immediate culling from the “advocacy group” once Mithril Defense secures a government contract.
Perhaps the best slam is inadvertent. Here’s how the San Antonio paper describes the company:
Employees for the company, which appears to be named after a magical silver-colored metal from “Lord of the Rings…”
And it would appear the “serial tech entrepreneur” is its founder, Justin Marston, whose LinkedIn profile shows he’s never been able to make anything go ever.
NONETHELESS, Texas legislators — led by Rep. Ryan Guillen — think this is what will solve our school shooting problem.
Guillen’s bill says the drones would be armed with “less lethal interdiction capability by means of air-based irritant delivery or other mechanisms,” and it would require one drone for every 200 students.
lol
Just flying around in the air dispensing “air-based irritants” like it’s not going to “irritate” the people it’s supposed to be saving far more often than it’s preventing school shootings. And, if I’m doing the math right, this means the state’s educational facilities need to acquire nearly 26,000 drones to comply with the law. And only Guillen knows why one-drone-per-200-students is the appropriate ratio to prevent school shootings, but that’s what he and his supporters are going with.
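For the record, the back-of-the-envelope arithmetic behind that figure looks something like the following, assuming roughly 5.2 million students enrolled in Texas public schools (the enrollment figure is an assumption here; the one-drone-per-200-students ratio comes straight from the bill):

\[
\frac{5{,}200{,}000\ \text{students}}{200\ \text{students per drone}} = 26{,}000\ \text{drones}
\]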
Not that schools are getting fucked entirely. A state that can’t seem to get behind school funding in any meaningful way will perhaps be talked into funding schools in the most meaningless way. Guillen’s bill amps up per student funding from $10/per to $100/per… but only if that extra money is spent on “hardening campuses, hiring security guards, or starting a drone program.” There will be no educational advantage here. Instead, students will see that extra money being spent on surrounding them with weapons that aren’t actually guns but are allegedly going to protect them from actual guns.
This is dumb stuff that ignores the real issue. It’s not going to save any students from school shooters. But, if implemented, it will cost Texas millions of dollars in the furtherance of nothing more than respecting their right to head out on a shopping trip with a load out that might seem excessive in Call of Duty multi-player. God Bless America.
Three years ago, the Fifth Circuit Appeals Court somehow arrived at the conclusion that tasing someone soaked in gasoline — an act of escalation that not only killed the suicidal person officers were supposed to be rescuing but also burned the entire residence to the ground — was not excessive force. It was supposedly justified by the gasoline-soaked man’s threats that he would burn himself and the house down if officers kept advancing on him.
Robbing him of his life and his remaining autonomy, Arlington, Texas police Officer Guadrama discharged his Taser and made the man’s threats a reality. And the Fifth Circuit still considered that to be just the sort of thing cops should be doing.
It went the other way here. A federal judge in California has arrived at the opposite conclusion in a nearly identical incident. (via Courthouse News Service)
In this case, Paul Hall was despondent because his family refused to interact with him, apparently “fed up with him” for reasons that go unexplained. Feeling abandoned, Hall soaked himself in gasoline, sat on the floor in the middle of the house, and threatened to light himself on fire.
Officer John Gale of the Weed, California police department responded to the call. His actions, as well as those of Paul Hall, were captured by the officer’s body camera. To his credit, Officer Gale at least made some effort to defuse the situation by talking to Hall, who repeatedly reminded him he was covered in gasoline and ready to take his own life by igniting the lighter he held in one of his hands.
When that didn’t work, Gale tried to take the lighter by force by attempting to wrestle it out of Hall’s hands. When that didn’t work, Gale went back to his first tactic: yelling repeatedly for Hall to drop the lighter. This tactic didn’t work the first few dozen times, but according to the footage, Gale did this same thing more than 50 times, perhaps expecting he was due for a win.
Right before he set Hall on fire with his Taser, Officer Gale again ordered Hall to “drop the lighter” and to “put it down.” And right before he fired at Hall, Hall dropped his hands to his sides, possibly on his way to complying. But he never got a chance. That’s when Gale fired and that’s when Hall caught on fire.
Gale first insisted this wasn’t excessive force. The court says that, in some cases, these actions might not have been excessive. But in this case, at best, that’s still an open question. And the reason it’s still a set of disputed facts is that the officer’s own body cam footage (arguably) contradicts his assertions. From the decision [PDF]:
Defendant Gale’s repeated assertion that Plaintiff Hall “appeared to be flicking the lighter to start” at the time Defendant Gale shot his taser is disputed by Plaintiff and arguably contradicted by the body camera footage […] Upon review of the body camera footage, it is not undisputedly apparent to the Court that Plaintiff Hall appeared to be flicking the lighter to start. Thus, a reasonable jury could conclude, during his interactions with Defendant Gale, Plaintiff Hall did not attempt to ignite the lighter such that he posed an immediate threat that warranted intermediate force.
Then there’s the fact it appears Hall was finally attempting to comply with Gale’s demands moments before Gale decided to deploy his Taser.
Second, Plaintiff Hall alleges he complied with Defendant Gale’s commands to put down the lighter by moving his hands down by his side, including the one holding the lighter. The body camera footage confirms, shortly before Defendant Gale tased Plaintiff Hall, Plaintiff Hall had dropped both hands, including the one holding the lighter. The body camera footage also shows Defendant Gale shot Plaintiff Hall with the taser after Plaintiff Hall had dropped both of his hands. A reasonable jury could conclude any threat related to the lighter dissipated the moment Plaintiff Hall put his hands down.
That’s strike two. Strike three is the undeniable fact that Hall wasn’t threatening anyone other than himself. And there’s plenty of evidence on the record that Officer Gale couldn’t have reasonably believed Hall was a threat to others: the officer made no attempt to remove other people from the house, didn’t bother to bring in the fire extinguisher he had in his squad car, and didn’t hold off on taking any action until the fire department arrived. If he really thought he needed to save others from the immediate threat of a fire, he would have taken those actions. In the end, he was the one to ignite the fire that threatened others, all while claiming this was the only way to prevent the man he set on fire from harming other people.
And here’s where the decision referenced in the opening of this post comes into play. Completely ridiculously, Officer Gale cited that decision in support of his qualified immunity request despite the facts that (1) the case was handled by a different circuit, (2) the Fifth’s decision was non-precedential, and (most importantly) (3) it had been issued two years after he set Paul Hall on fire. As any plaintiff knows and every cop defendant should know, you can’t cite something as precedent when it happens after the incidents in dispute. The clue is in the goddamn word, which requires something to precede something else to be relevant, not arrive after the fact.
Immunity is denied because, even if the court were inclined to treat a non-binding decision issued two years after Officer Gale set Paul Hall on fire with his Taser as relevant, the facts of the two cases are different enough that Officer Gale couldn’t reasonably believe non-binding non-precedent put him in the clear for deciding that setting someone on fire for the crime of threatening to set themselves on fire was justified.
It’s bad enough the body cam footage contradicted the officer’s claims. It’s even worse that his lawyer thought he could get some QI for his client by time-traveling to the future (so to speak) to find cases supporting his client’s actions.
Axon — having apparently exhausted the market for Tasers — has moved on to hawking body cameras to police departments. The cameras are the loss leaders. The real money comes from perpetual service contracts and access fees. With every new feature added to Axon’s line of products, the difficulty level of switching manufacturers and service providers increases.
And that’s exactly why Axon decided to add some AI to the mix. It’s one more thing that, once established, would be nearly impossible to easily replace on the fly, should a law enforcement agency consider taking their business to a competitor.
The AI isn’t there to help identify objects or people captured by Axon’s body cams (although that’s likely on its way as well). Axon thinks the future of policing involves trimming the time officers spend writing reports, theoretically freeing them up to do more policing and less paperwork.
On Tuesday, Axon, the $22 billion police contractor best known for manufacturing the Taser electric weapon, launched a new tool called Draft One that it says can transcribe audio from body cameras and automatically turn it into a police report. Cops can then review the document to ensure accuracy, Axon CEO Rick Smith told Forbes. Axon claims one early tester of the tool, Fort Collins Colorado Police Department, has seen an 82% decrease in time spent writing reports. “If an officer spends half their day reporting, and we can cut that in half, we have an opportunity to potentially free up 25% of an officer’s time to be back out policing,” Smith said.
Well, given the number of debacles created by over-reliance on AI, this hardly seems like an ideal growth market. But Axon seems pretty convinced cops will grow to love this tech tool. And they might! After all, they’re not nearly as concerned about the collateral damage AI-enhanced report writing might cause as the people who are the most likely victims of that collateral damage… which would be pretty much everyone but the cops themselves.
Fortunately, there’s already been some pushback against Axon’s shiny new toy. And it comes from kind of an unexpected source: prosecutors. Law enforcement agencies in Washington State’s most populous county are being told in no uncertain terms that AI-crafted police reports are not welcome here. (h/t EFF)
The King County Prosecuting Attorney’s Office (KCPAO) has instructed police agencies to not use Artificial Intelligence (AI) when writing reports.
In a memo to police chiefs sent this week, Chief Deputy Prosecutor Daniel J. Clark said any reports written with the assistance of AI will be rejected due to the possibility of errors.
“We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now,” Clark’s memo states. “AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.”
What’s being stated here isn’t speculation. It’s already happened. Clark cited an AI-assisted report received by prosecutors that referenced an officer who wasn’t actually at the scene.
Axon, however, remains bullish on its new offering. Its statement in response to the KCPAO’s announcement says a lot of things that sound meaningful, but are ultimately meaningless once you understand every asserted backstop relies on cops doing their job thoroughly, honestly, and competently.
“Agencies have various considerations when implementing new public safety technology and Axon is dedicated to offering comprehensive resources to support them throughout this process as well as addressing questions or concerns. With Draft One, initial report narratives are drafted strictly from the audio transcript from the body-worn camera recording and Axon calibrated the underlying model for Draft One to minimize speculation or embellishments. Police narrative reports continue to be the responsibility of officers and critical safeguards require every report to be edited, reviewed and approved by a human officer, ensuring accuracy and accountability of the information. Axon rigorously tests our AI-enabled products and adheres to a set of guiding principles to ensure we innovate responsibly, including building in controls so that human decision-making is never removed in critical moments.”
Police reports have never been the paragon of accuracy. And when cops need to cover something up, they’re filled with deliberate misstatements (we call those “lies” in the civilian world) and omissions. Claiming adding AI to the mix will ultimately be OK because cops are the final backstop for accuracy belies a willful ignorance of how this process works in the real world.
The only meaningful move being made here is the ban on AI-assisted reports by prosecutors. This means any agency that’s currently paying for Draft One access should — if the King County prosecutor’s office is serious about this — have all of its reports rejected out-of-hand until access is removed and/or Draft One contracts are terminated.
So, for now, King County will only allow human-generated narrative “hallucinations” to be used during prosecutions. And while it’s not much to cheer about, at least it prevents officers from distancing themselves from their lies by blaming software for inconsistencies in their statements.
Excited delirium just won’t go away. No medical association recognizes this condition as legitimate. And no cop shop will ever move away from using it as a handy excuse for in-custody killings, at least not until forced to by state legislators.
Excited delirium actually pre-dates its current status as the go-to excuse for cops when they kill someone. Even then, it was questionable. But it really took off when Taser started supplying officers with tasers, which were immediately linked to several in-custody deaths, despite being advertised as a “less-than-lethal” force option. Taser’s lawyers (and supposed medical experts) offered testimony claiming people restrained or electrocuted to death were actually dying of a completely unrelated medical condition.
This was taken as gospel by cops who didn’t want to be held accountable for killing people — especially people who were suffering from mental health issues and, in most cases, were unarmed when multiple officers delivered electric shocks and/or piled on top of their prone bodies until they suffocated.
One of the most infamous murders committed by a cop — the murder of George Floyd by Minneapolis police officer Derek Chauvin — had its own “excited delirium” nexus. Officer Thomas Lane, who watched Chauvin kneel on Floyd’s neck for nine minutes, put on this performance for his body camera as he did nothing to prevent the killing he was witnessing:
I am worried about excited delirium or whatever.
“Or whatever.” And define “worry,” since it clearly didn’t mean offering any medical assistance whatsoever to the person this cop exoneratively declared might be suffering from a medical condition only cited by cops in the aftermath of an in-custody killing.
Four years ago, documents obtained by a public records requester showed the Charlotte-Mecklenburg PD was filling officers’ heads with disinformation — not only claiming “excited delirium” was a legitimate medical condition, but also that people who literally could not breathe due to officers’ restraint tactics were just informing officers of this so-called medical condition when they said they were having difficulty breathing. Listed among the “symptoms” of “excited delirium” were exactly those claims of breathing difficulty.
Since then, some things have changed. Officer Chauvin murdered George Floyd and, in an unexpected development, was actually convicted of murder. In a few localities, officials have passed laws forbidding officers or coroners from citing excited delirium as the cause of death.
The last holdouts in the medical profession have finally agreed “excited delirium” is a BS diagnosis, primarily because the only people who ever make this medical conclusion wear badges and have recently killed people, almost all of them unarmed.
But the Rochester, New York Police Department still wants to treat excited delirium as a legitimate medical condition. Of course, that’s only because it gives officers an out when they’ve killed someone and definitely not because anyone on or off the force actually believes it’s anything more than a handy excuse for police brutality.
Training materials obtained by Jenny Wadhwa and uploaded to MuckRock contain the usual excited delirium bullshit, along with a PowerPoint slide [PDF] that says the same thing the Charlotte PD’s training materials say: people saying “I can’t breathe” are just in the (life-threatening) throes of an excited delirium episode.
Also fun to note is that the term “unlimited endurance” is declared a symptom when all it actually means is that the person being restrained managed to tire out some of the out-of-shape cops who responded to the scene. And I’m not just making generalizations about cops, donuts, and the fact that most of them spend most of their hours sitting in cars. It’s a fact: most cops can be worn down by anyone in semi-decent physical shape.
Although the physical requirements of police work suggest the importance of maintaining a healthy weight status, recent research suggests that 40.5% of American police officers are obese, which is a prevalence rate above the national average of 35.5% for adult men and 35.8% for adult women.
In this context, “superhuman strength” and “unlimited endurance” should probably just be read as “regular human strength” and “regular human endurance.”
In fact, the so-called “training” is best read as an exhortation to commit violence while providing officers with an exonerative cover story. The slides say it can be triggered by the use of either illegal or legal drugs.
Death usually follows a bizarre behavior episode and/or use of illegal drugs or prescription medication
In practice, this means literally any chemical substance found in the body during a coroner’s examination can be used to buttress “excited delirium” claims.
It also claims there are four stages in the “excited delirium” progression, with the end result being apparently inevitable.
Elevated body temperature
Agitation
Respiratory arrest
Death
But that does nothing to explain why people only die of “excited delirium” after being tased/restrained/brutalized by cops. No one has ever reported someone just died of “excited delirium” without the application of force by police officers. So, even if we were so careless as to accept the theory of “excited delirium” as a legitimate medical condition, it would be even more careless to immediately discount this outside factor that is present in 100% of excited delirium deaths.
This isn’t training — at least not in the sense those of us in the private sector are used to. What’s being imparted here is a justification for excessive force deployment and a preconceived narrative for in-custody killings. We would be legitimately upset to discover companies are training employees how to dodge regulatory oversight and provide them with immediate plausible deniability for their actions. We should be way more upset that people paid with our tax dollars are literally encouraging police brutality by preemptively providing police officers with a pseudo-scientific explanation for the killings they may decide to commit.
Retailers have increased their reliance on cameras over the years to cut down on retail theft. In more recent years, they’ve been adding more tech to their surveillance arsenal, including automatic plate readers in their parking lots and facial recognition capabilities to their existing CCTV networks.
And yet, the nation is inundated with (mostly anomalous and anecdotal) news reports about unprecedented increases in retail theft. Of course, this isn’t indicative of the retail sector as a whole — something that was pointed out by none other than a group that specializes in retail analysis. Its take on the retail theft “spike” was that while there were some isolated areas where this was a problem, in most cases, it was retailers seeking to hide other business failures under the eye-grabbing area rug of “retail crime spree” headlines.
Whether or not the crime wave is real, retailers have always been trying to limit “shrink,” the in-house term used to cover everything from smash-and-grab robberies to employees skimming cash from the tills.
Retail giant TJX, the parent of TJ Maxx, Marshalls and HomeGoods, said it’s equipping some store employees with body cameras to thwart shoplifting and keep customers and employees safe.
TJX finance chief John Klinger disclosed the body-camera initiative on an earnings call last month. “It’s almost like a de-escalation, where people are less likely to do something when they’re being videotaped,” he said.
Jesus. It’s not even the extra surveillance or the possibility the cameras might be abused by employees to capture footage they definitely should not be capturing (like wandering around restrooms or changing rooms). That part of it is only a minor concern.
The biggest concern is the one brushed off so easily by TJX’s “finance chief.” His belief that equipping low-paid hourly workers with body cams will “de-escalate” anything is indicative of his ignorance. These are the words of someone who’s never worked on the floor of a retail outlet at any point in his life.
Do you know what happens when an hourly employee in full uniform starts following a suspected thief? In far too many cases, it escalates things. Sure, there are a few would-be thieves who will abandon their shoplifting plans when it’s apparent they’re being scrutinized. But if people have wandered into a store to commit crimes, the most common response is to go after the person who appears to be trailing them around the store. At best, it will just be verbal attacks. At worst, it will be physical violence.
And it’s not like any of the executives quoted or referred to in this story are suggesting they’ll be paying employees more for providing customer service while also acting as surveillance cameras. Most large retailers have in-house “loss prevention” teams that operate in plain clothes, making it much easier to catch thieves while remaining undetected.
Pinning a camera to the shirt of a uniformed employee just puts them in additional danger and subjects them to additional verbal abuse, because even non-shoplifters are going to get mouthy when they notice their actions are being recorded by someone who’s just an hourly employee.
Here’s what the rollout looks like so far:
The job of these security workers “was to just stand there with the tactical vest labeled ‘security,’ and the camera mounted on the vest,” said the employee, who spoke under the condition of anonymity because they were not authorized to speak to reporters.
“It feels like the implementation of this program with the cameras isn’t meant to achieve anything, but rather just something the company can point to” to say it is improving security.
These employees are instructed — as almost all retail employees are — to stay out of actual theft situations. They’re instructed not to stop or pursue suspected thieves.
So, why even bother placing someone at the front of the store with a uniform and a camera if they’re not actually trailing suspected thieves? Well, one has to assume these employees will become much more mobile if the retailer desires it. A Marshalls store in Miami Beach, Florida states that the cameras are designed to record “specific events involving critical incidents for legal, safety, and training purposes.” To accomplish that, the employee — the one wearing the camera — will be expected to be on the scene of these “critical incidents,” which is generally not what hourly employees are expected to do in retail environments.
It’s truly a half-assed rollout and it’s setting everyone up for failure. Either it will make non-loss prevention hourly jobs more dangerous, or it will have zero effect because would-be criminals haven’t been deterred by dozens or hundreds of cameras already present in retail outlets.
Whether it has any effect at all won’t stop companies from buying cameras, so the leader in law enforcement body camera outfitting has already started positioning itself to grab a majority of this new market.
Axon Enterprise, which owns Taser and primarily develops technology and products for police, launched a “Body Workforce” camera this year for retail and health care workers.
These cameras are lighter than the ones Axon develops for police officers because they don’t record for as long and don’t require as much battery life, Axon President Joshua Isner said at an analyst conference last month. They are also a more “inviting product, instead of more of like a militaristic” camera worn by police, he said.
And so it goes. Cops are looking more and more like military personnel. Mall cops are looking more like real cops. And mall personnel are well on their way to looking like mall cops. If nothing else, the body cam industry can expect healthy year-over-year increases, even if their retail customers continue to struggle.