ChatGPT Is Pitching In To Help Federal Officers Misrepresent Confrontations With Protesters
from the selective-recall dept
We’ve already written several times about the danger posed by adding AI to law enforcement incident/arrest reports. There are a lot of obvious problems, ranging from AI misinterpreting what it’s seeing to AI padding reports with so much unsupported gibberish that it isn’t the time-saver companies like Axon (formerly Taser) claim it will be.
The cost-effectiveness of relying on AI is pretty much beside the point, at least as far as the cops are concerned. This is the wave of the future. Whatever busywork can be pawned off on tireless AI tech will be. It will be up to courts to sort this out, and if a bot can craft “training and expertise” boilerplate, far too many judges will give AI-generated police reports the benefit of the doubt.
The operative theory is that AI will generate factual narratives free of officer bias. The reality is the opposite, for reasons that should always have been apparent. Garbage in, garbage out. When law enforcement controls the inputs, any system — no matter how theoretically advanced — will generate stuff that sounds like the same old cop bullshit.
And it’s not just limited to the boys in blue (who are actually now mostly boys in black bloc/camo) at the local level. The combined forces of the Trump administration’s anti-migrant efforts are asking AI to craft their reports, which has resulted in the expected outcome. The AP caught something in Judge Sara Ellis’s thorough evisceration of Trump’s anti-immigrant forces as they tried to defend the daily constitutional violations they engaged in — many of which directly violated previous court orders from the same judge.
Contained in the 200+ page opinion [PDF] is a small footnote that points to an inanimate co-conspirator in the litany of lies served up by federal law enforcement in defense of its unconstitutional actions:
Tucked in a two-sentence footnote in a voluminous court opinion, a federal judge recently called out immigration agents using artificial intelligence to write use-of-force reports, raising concerns that it could lead to inaccuracies and further erode public confidence in how police have handled the immigration crackdown in the Chicago area and ensuing protests.
U.S. District Judge Sara Ellis wrote the footnote in a 223-page opinion issued last week, noting that the practice of using ChatGPT to write use-of-force reports undermines agents’ credibility and “may explain the inaccuracy of these reports.” She described what she saw in at least one body camera video, writing that an agent asks ChatGPT to compile a narrative for a report after giving the program a brief description and several images.
The judge noted factual discrepancies between the official narrative about those law enforcement responses and what body camera footage showed.
AI is known to generate hallucinations. It will do this more often when it’s effectively asked to do so, as the next portion of the AP report makes clear.
But experts say the use of AI to write a report that depends on an officer’s specific perspective without using an officer’s actual experience is the worst possible use of the technology and raises serious concerns about accuracy and privacy.
There’s a huge difference between asking AI to tell you what it sees in a recording and asking it to summarize with parameters that claim the officer was attacked. The first might make it clear no attack took place. The second is just tech-washing a false narrative to protect the officer feeding these inputs to ChatGPT.
AI — much like any police dog — lives to please. If you tell it what you expect to see, it will do what it can to make sure you see it. Pretending it’s just a neutral party doing a bit of complicated parsing is pure denial. The outcome can be steered by the person handling the request.
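To make that difference concrete, here’s a minimal, hypothetical sketch (using the OpenAI Python SDK; the model name, prompts, and footage description are invented for illustration and are not taken from the opinion or the agents’ actual workflow) of how the framing an officer supplies can steer the “narrative” the model produces:

```python
# A minimal sketch of how prompt framing steers an LLM's narrative.
# Assumes the openai Python SDK is installed and OPENAI_API_KEY is set;
# everything below is illustrative, not drawn from the court record.
from openai import OpenAI

client = OpenAI()

def summarize(prompt: str) -> str:
    """Ask the model for an incident narrative based on the given framing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Neutral framing: the model only has the footage description to work with.
neutral = summarize(
    "Describe, factually, what happens in this body camera footage: "
    "a crowd stands on a sidewalk chanting while officers form a line."
)

# Leading framing: the "attack" is supplied as a premise, so the model
# will dutifully produce a narrative that includes one.
leading = summarize(
    "Write a use-of-force report narrative for this footage, in which "
    "officers were attacked by the crowd: a crowd stands on a sidewalk "
    "chanting while officers form a line."
)

print(neutral)
print(leading)
```

Same footage description, two very different reports; the only variable is the premise the prompter baked in.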
While it’s true that most law enforcement officers will write reports that excuse their actions/overreactions, pretending AI can solve this problem does little more than allow officers to spend less time conjuring up excuses for their rights violations. “We can misremember this for you wholesale” shouldn’t be an unofficial selling point for this tech.
And I can guarantee this (nonexistent) standard applies to more than 90% of law enforcement agencies with access to AI-generated report-writing options:
The Department of Homeland Security did not respond to requests for comment, and it was unclear if the agency had guidelines or policies on the use of AI by agents.
“Unclear” means what we all assume it means: there are no guidelines or policies. Those might be enacted at some point in the future, following litigation that doesn’t go the government’s way, but for now, it’s safe to assume the government will continue operating without restrictions until forced to do otherwise. And that means people are going to be hallucinated into jail, thanks to AI’s inherent subservience and the willingness of those in power to exploit whatever they can, whenever they can, until they’ve done so much damage to rights and the public’s trust that it can no longer be ignored.
Filed Under: ai, border patrol, cbp, chatgpt, chicago, doj, ice, mass deportation, sara ellis, trump administration
Companies: openai


Comments on “ChatGPT Is Pitching In To Help Federal Officers Misrepresent Confrontations With Protesters”
I don’t like this idea of “hallucination”, not least because the term is ambiguous (it could just mean “bugs”, like in all programs).

Most of the time, AI misses a lot of context, like many people would if they only got a brief summary and a few minutes of video, and it tries to connect dots after ingesting countless hours of documentaries, movies, TV shows, music videos, and memes, without being able to differentiate reality from fiction (as many people can’t).

That’s where cop experience is important, like in so many professions, and AI is still not able to replace it. Maybe one day AI could stand in for an average cop and help with the boring reporting part (and never make decisions), but until then, we’ve got to live with broken tools. So just don’t use it when lives depend on it!
Re:
I agree we should not be using “hallucination” anymore. Hallucination is a category error. It implies perception.
Re: Re:
We also ought to stop trying to apply the idea of “intelligence”, which means the term “A.I.” should be written in scare quotes, or not at all.
Unless, of course, an artificial machine actually becomes intelligent. For now, that’s science-fiction. (Terms such as “malicious software” are similarly flawed, for the same reason.)
Of course the cops and their leash holders won’t think so, but using AI for anything official relating to charges or trials should lead to an automatic dismissal of charges. You can’t practically confront software that hallucinates your guilt, so anything it touches should be inadmissible.
@MrWilson > using AI for anything official relating to charges or trials should lead to an automatic dismissal of charges
A government that uses its Department of Justice to go after its “enemies”, commits war crimes, doesn’t apply the law fairly or equitably, and regularly lies and refuses to provide transparency (how about that Epstein list?) can’t be expected to know what a dismissal should be or when one should occur, let alone an automatic one, let alone one for violating protocols or standards, or for not having any at all.
@Tim > Garbage in, garbage out.
That it is garbage isn’t in question, but that LEOs use it as if it were manna from Heaven that nothing Earthly tastes as good as… that’s a current-administration thing, and it lacks oversight, accountability, and transparency.
We don’t expect white-out to fix our thinking, just to remove typos. First-gen spellcheck did the same, but it was so much better than white-out. Second-gen was grammar highlighting, which surely could have detected insure vs. ensure and affect vs. effect but didn’t. Third-gen is ChatGPT.
It’s not an intelligence, and pretending it is, using that word, and creating two new alternate levels (AGI and SGI) pretends that where we are today, we HAVE an intelligence.
We do not. We have a third-gen spell and grammar checker 100% based on crap already published by others. It’s like putting a line cook next to a Cordon Bleu chef and saying “turn out food like he does.”
Everyone who insists on using the term AI because “it’s too hard to call it an LLM or explain the difference” contributes to the overall stupidity of the users who expect an actual intelligence.
That includes lawyers, judges, websites, magazines, and police reports.
The problem has become so ingrained that the assumption is now a begged question: accepted without examination, then ignored.
Guidelines (not hard rules, or regulations, or, god forbid, laws) will be enacted when it makes for a good show, and long after AI-abuse culture is thoroughly embedded in any relevant agency or department.
…Where have you seen police in a black bloc?
If 'No using AI at all you lazy schmucks' isn't an option...
How about a trade: cops get to use AI to write their reports, and any time the AI gets something wrong, it’s treated as the cop in question knowingly lying to the court, leading, at a minimum, to all charges against the accused being dismissed with prejudice, since if they’re using AI they couldn’t have cared that much in the first place.
If AI is good enough to condemn the accused then it’s good enough to see those charges tossed and new ones filed against a cop when they choose to be lazy and their tool ‘lies’ on their behalf in court.