Cellebrite Dumps AI Into Its Cell Phone-Scraping Tool So Cops Can Hallucinate Evidence

from the why-does-anyone-need-this? dept

I honestly don’t understand this compulsion to break things that are already working fine. Axon makes body cameras (and Tasers!), but it simply wasn’t enough to equip cops with cameras and cop shops with expensive service contracts. No, the company insisted the way forward was dropping AI into existing tech so robots could start doing the boring cop paperwork.

Cellebrite makes tools that crack seized phones and scrape everything out of them for perusal by investigators and prosecutors. But just being able to do that wasn’t good enough. Now, as Joseph Cox reports for the always-essential 404 Media, Cellebrite is going to make its existing product more chaotic by adding “intelligence” more often known for what it gets wrong, rather than what it gets right.

Cellebrite has introduced AI specifically into Guardian, which is a software-as-a-service “evidence management solution,” the company says. In practical terms, Guardian is a piece of software for analyzing evidence already in a police officer’s possession.

According to Cellebrite’s February 6 announcement, the company’s generative AI capabilities can summarize chat threads “to help prioritize which threads may be most relevant,” contextualize someone’s browsing history to show what was searched for, and build “relationship insight.”

Well, that’s no good. The first problem is that AI isn’t exactly great at contextualizing data or conversations. And other problems can develop based on prompts given by investigators. At some point, someone’s going to get hallucinated right into a lengthy prison sentence they haven’t earned. It’s not a matter of “if.” It’s a matter of when.

But the most immediate problem is this: cops are already looking at this AI as a way to sniff out criminal activity they’re not even investigating. The press release from Cellebrite contains a quote from a police official who heads a force overseeing a town with [squints at Wikipedia page] 1,365 residents.

“It is impossible to calculate the hours it would have taken to link a series of porch package thefts to an international organized crime ring,” said Detective Sergeant Aaron Osman with Susquehanna Township, Pennsylvania Police Department, who recently piloted the solution. “The GenAI capabilities within Guardian helped us translate and summarize the chats between suspects, which gave us immediate insights into the large criminal network we were dealing with.”

It is impossible to calculate. It’s also apparently impossible to report. There doesn’t seem to be any information on this major break (in what initially appeared to be a minor case) anywhere on the PD’s press page, much less anywhere else on the internet. I’m not saying the detective is lying — lying usually serves the person doing the lying, not a third party that probably assumes no one but other PR people reads its press releases — but I can’t find anything that supports this assertion.

If anything, Sergeant Osman is probably overstating the results of the GenAI-assisted phone search, if only because he’s flattered someone from Cellebrite thought a small Pennsylvania town would be the best place to do a trial run of its new tech.

The largest problem isn’t the AI itself, though. It’s what the AI does, which has the potential to generate constitutional collateral damage. Performing a targeted search via human interaction is one thing. Allowing software to just go blundering around in the scraped contents of a seized phone is quite another, as ACLU lawyer Jennifer Granick stated to 404 Media:

“The Fourth Amendment does not permit law enforcement to rummage through data, but only to review information for which there is probable cause. To use an example from the press release, if you have some porch robberies, but no reason to suspect that they are part of a criminal ring, you are not allowed to fish through the data on a hunch, in the hopes of finding something, or ‘just in case.’”

That’s a problem courts will need to confront. Chances are, that won’t be any time soon. There’s almost zero chance magistrate judges are being informed AI will be used to search seized phones when cops request search warrants. And there’s even less chance defendants will be informed the search of their phone was half-algorithm, especially when doing so might give defendants the ability to challenge the evidence being used against them.

When it does finally bubble to the judicial surface, will courts consider AI-assisted searches just another version of “inevitable discovery?” Or will they see this for what it is: something clearly not predicted by the creators of the Fourth Amendment, nor covered by current court precedent?

Companies: cellebrite


Comments on “Cellebrite Dumps AI Into Its Cell Phone-Scraping Tool So Cops Can Hallucinate Evidence”

Terr (profile) says:

Re:

I worry less about evidence exposed to an adversarial trial and more about how one piece of evidence will create a false pretext to search for more, which probably won’t be suppressed since the police will nudge-nudge-wink-wink be acting “in good faith.”

There’s also how faked evidence could be cited to pressure someone into a plea deal, but police in the US are already allowed to tell giant lies if it means securing a confession.

Terr (profile) says:

Fraudulently fabricate probable cause... with a computer!

Soon: The software is going to hallucinate something which “justifies” an arrest, search, or warrant, and then police are going to claim they are in the clear since it was “in good faith”.

As it gets trained on similar content, it will do so more and more often, and police will be more and more pleased with the “mistakes” in their favor.
