London Police Move Forward With Full-Time Deployment Of Facial Recognition Tech That Can't Accurately Recognize Faces

from the don't-let-the-fact-that-it-doesn't-work-stop-you dept

The London Metropolitan Police sure loves its facial recognition tech. But it’s an unrequited love. The tech doesn’t appear to have done anything for the Met during its past deployments.

Documents obtained by Big Brother Watch in 2018 showed the Met’s deployments had rung up a 98% false positive rate in May of that year. Nothing improved as time went on. Subsequent documents showed a false positive rate of 100%. Every “match” was wrong. Not exactly the sort of thing you want to hear about tech capable of scanning 300 faces per second.

This followed an earlier report covering a test run by the South Wales Police at a handful of public events. In comparison, the South Wales tests were a success: a mere 92% of the matches were false positives.

The Met’s tech showed some slight improvement in 2019, moving up to a 96% false positive rate. This continued failure to recognize faces — along with a number of privacy concerns — prompted a UK Parliamentary Committee to call for an end to the use of facial recognition tech by UK government agencies. This advice was ignored by the Home Office, which apparently believed UK law enforcement would be able to fail upwards towards a brave new world of facial recognition tech worth the money being spent on it.

We’ve apparently reached that inflection point. Test runs are a thing of the past. It’s time for Londoners to put their best face forward.

British police are to start operational use of live facial recognition (LFR) cameras in London, despite warnings over privacy from rights groups and concerns expressed by the government’s own surveillance watchdog.

First used in the capital at the Notting Hill carnival in 2016, the cameras will alert police when they spot anyone already on “wanted” lists.

“The use of live facial recognition technology will be intelligence-led and deployed to specific locations in London,” the city’s Metropolitan Police said in a statement.

“Intelligence-led,” says the agency that has so far only managed to incorrectly identify people almost 100% of the time. There’s more “intelligence” further on in the article when the Met says the software that’s hardly managed to correctly identify people will help “identify and apprehend suspects.” Gun and knife crime top the list of things expected to be curtailed by unproven tech, followed by the sexual abuse of children and “protecting the vulnerable.”

Also lol a bit at this, which uses a trite phrase made even triter by the abysmal performance of the Met’s AI:

Metropolitan Police Assistant Commissioner Nick Ephgrave said in a statement: “We are using a tried-and-tested technology, and have taken a considered and transparent approach in order to arrive at this point.”

He’s technically correct. It has been tried and tested. What it hasn’t been is accurate, and that’s what counts most when people’s rights and freedoms are on the line. But better an unknown number of innocent people be misidentified than allow a single suspect to go unscanned, I guess.



Comments on “London Police Move Forward With Full-Time Deployment Of Facial Recognition Tech That Can't Accurately Recognize Faces”

19 Comments
This comment has been deemed insightful by the community.
PaulT (profile) says:

"But better an unknown number of innocent people be misidentified than allow a single suspect to go unscanned, I guess."

When you understand all the parameters, pretty much. Here’s my evaluation – there’s a newly confirmed Tory government. Among the Tories’ many issues is a tendency to be the party of "law and order" without addressing any of the fundamental issues underlying any crime problems, especially in inner cities. They also have a tendency to launch expensive but useless IT projects, awarded to whichever bidder has the closest connections to some Tory MP. Then, there’s this:

https://www.theguardian.com/law/2019/may/04/stop-and-search-new-row-racial-bias

So, the most likely explanation is that some public school chum of one of Boris’ minions stands to profit from a meaty contract, while the Met needs an excuse to stop and search whoever they want without being called racist. Enter a flawed technology that incorrectly identifies nearly everyone, and they have the perfect excuse. "We’re not racist, the computer told us to search you", forgetting to mention that the tech is telling them to search everyone.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Friday deep thoughts:

"There’s 3 types of people in this world: those who CAN count, and those who CAN’T"

-surveillance doesn’t prevent crime, it records it
-just look at the crime stats in London…and they’ve been under surveillance since at least the 1980s IRA bombings

peter says:

The problem is not with the tech

The problem is not with the false positives, but with what they do with them.

Because then you are into the whole "We got a match so you are being detained and searched under Section 60. And if you cannot give us ID we are arresting you to determine who you are." Multiply that by potentially thousands of innocent, pissed-off people, and you are starting to get a measure of how badly this can go.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: The problem is not with the tech

The good news is that if the false positive rate is 92 percent, they probably won’t get into the habit of going apeshit every time they get a match.

The bad news is that if they get the false positive rate down to 1 percent, those 1 percent are going to be really fucked.

Better a high error rate than a low-but-not-zero error rate.

This comment has been deemed insightful by the community.
Anonymous Coward says:

a false positive rate of 100%. Every "match" was wrong. Not exactly the sort of thing you want to hear about tech capable of scanning 300 faces per second.

That’s a really dumb thing to say. The issue isn’t how many faces it scans. The issue is how many actual matches it produces for humans to dig through.

If it scans 10,000,000,000 faces and produces 1 match that’s wrong, that’s a 100 percent false positive rate… but it is not a problem in the way that you are trying to suggest. Obviously not producing any good matches at all is a problem, but it’s a different kind of problem.

92% of its matches were false positives

96% false positive rate

Those could be perfectly acceptable numbers.

Suppose that you had 100 matches, each of which required 5 minutes of somebody’s time to check manually. If 96 of those matches were false, and 4 of them were true matches for axe murderers, which allowed you to locate and apprehend those axe murderers, then that would be an extremely successful system. You would have caught 4 axe murderers with about a day of work (100 checks × 5 minutes ≈ 8.3 hours).

Obviously these systems aren’t going to be providing that kind of dramatic result, but it’s just innumerate nonsense to claim that even a 99 percent false positive rate is inherently bad. The false positive rate of looking intensively at everybody would be even higher.
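
To make the arithmetic concrete, here is a minimal sketch of that back-of-the-envelope calculation in Python; every number in it is the hypothetical one from the scenario above, not a figure from any real deployment:

    # Back-of-the-envelope arithmetic for the hypothetical scenario above.
    # All numbers are illustrative, not from any actual Met deployment.
    matches = 100                  # alerts raised for human review
    false_positive_share = 0.96    # fraction of those alerts that are wrong
    minutes_per_check = 5          # human time to verify one alert

    true_hits = matches * (1 - false_positive_share)
    review_hours = matches * minutes_per_check / 60

    print(f"{true_hits:.0f} genuine matches for {review_hours:.1f} hours of review")
    # -> 4 genuine matches for 8.3 hours of review

The point is that the cost of a given false positive rate depends entirely on how many alerts there are to wade through and what checking each one costs, not on the percentage by itself.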

But that’s not the real problem with that argument. The real problem is that it distracts from the fact that these systems would be far more dangerous if they WORKED.

It’s not about the error rate. It’s about the abuse rate, and even more than that it’s about the horrible psychological and social effects of a world in which Authority Knows All and No Transgression Goes Unpunished. A world with widely deployed facial recognition is a dystopia even if the error rate is zero.

If you constantly talk about the error rate, then when Big Brother manages to get the error rate down, or if Big Brother simply calls you on the patent invalidity of your specious pseudo-statistical arguments, you are going to be left with nothing to say.

This is a constant problem on Techdirt. It seems like none of the writers here are willing to bite the bullet and say that these systems are bad in their goals, so they end up saying things that make no sense instead.

Anonymous Coward says:

Re: Re: Re:2 Re:

The computer puts up an alert saying "possible person of interest detected on camera number 1234", with the best face shot from the camera video, a link to watch the whole video, and the best matching shots of that person in the database.

The user then looks to see if they agree with the computer that the person in the video is the person in the database. If they do agree, they dispatch somebody or something.

… or more likely the hit just gets quietly logged, and the person doesn’t even look at the mug shots until later, or unless there are multiple hits for that individual in that area.

What do YOU think happens? The camera deploys a shotgun and kills the person?
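
For what it’s worth, a minimal sketch of that kind of log-first, review-later flow might look like the following; every name, field, and threshold here is invented for illustration and is not any real system’s API:

    # Hypothetical sketch of the log-first alert flow described above.
    # All names, fields, and thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        camera_id: int
        face_crop: bytes          # best face shot from the camera video
        video_link: str           # link to the full clip for the reviewer
        candidate_ids: list[str]  # best-matching watch-list entries

    def handle(alert: Alert, recent_hits: dict[str, int], threshold: int = 2) -> None:
        # Log every hit; only surface it to a human once the same
        # candidate has shown up repeatedly in the same area.
        for candidate in alert.candidate_ids:
            recent_hits[candidate] = recent_hits.get(candidate, 0) + 1
            if recent_hits[candidate] >= threshold:
                print(f"review queue: {candidate} seen {recent_hits[candidate]}x "
                      f"near camera {alert.camera_id} -> {alert.video_link}")

Either way, a human decision sits between the match and any dispatch – which is exactly why the abuse rate matters more than the error rate.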

Scary Devil Monastery (profile) says:

Re: Re: Re:3 Re:

"What do YOU think happens? The camera deploys a shotgun and kills the person?"

Once the camera shot gets linked to the successor of what automated profiling eventually becomes, and politicians get it into their heads that you can predict which person is a murderer without that person actually having to kill someone first, then yes. Modern phrenology – the urge to identify the Bad Guy(TM) at the simple press of a button – will have at least a few authoritarian regimes investigating the option. Not as clean-cut as the "Selbstschussanlagen" at the DDR border, but I doubt China, for instance, will hesitate to roll out a program like that all over Xinjiang.

What is actually going to happen here in the more democratic west is that a SWAT team will be alerted that Abu "Baby-eating bomber man", wanted on multiple counts of terrorism, has just been spotted rolling a small trolley disguised as a baby carriage down the street towards parliament. No one’s going to take the time to actually check whether it’s a false positive or not.

Once the proud new parent taking his offspring for a walk has had his afternoon ruined by a few 5.56 mm rifle rounds through his skull, it’ll be just another lamentable case of "Jean Charles de Menezes".

Sok Puppette says:

Re: Re: creative makeup

https://www.documentjournal.com/2020/01/anti-surveillance-makeup-could-be-the-future-of-beauty/

I happened to be playing with the AWS Rekognition demo the other day, and I fed it a bunch of makeup jobs from the CV dazzle site, as well as various other images with "countermeasures" from around the Web.

Given a nice clear picture, it found every single face and every single feature on every face. It also did a good job of identifying age, sex and mood, right through some pretty extreme makeup. Try it out. It’s available to the public.
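
If you’d rather poke at it programmatically than through the demo page, the public boto3 API is enough. A rough sketch – it assumes AWS credentials are configured, and "dazzle.jpg" is a placeholder path for whatever test image you feed it:

    # Rough sketch of the kind of test described above, using the public
    # boto3 Rekognition API. "dazzle.jpg" is a placeholder image path and
    # AWS credentials are assumed to be configured.
    import boto3

    client = boto3.client("rekognition")

    with open("dazzle.jpg", "rb") as f:
        image = {"Bytes": f.read()}

    # Face detection with attributes (age range, gender, emotions)
    for face in client.detect_faces(Image=image, Attributes=["ALL"])["FaceDetails"]:
        age = face["AgeRange"]
        emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
        print(f"face: age {age['Low']}-{age['High']}, "
              f"gender {face['Gender']['Value']}, "
              f"top emotion {emotion['Type']}")

    # Celebrity identification against Amazon's own index
    for celeb in client.recognize_celebrities(Image=image)["CelebrityFaces"]:
        print(f"celebrity match: {celeb['Name']} ({celeb['MatchConfidence']:.0f}%)")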

The problem with the countermeasures is that you never know whether the other guy has out-evolved you.

By the way, the good thing about Rekognition is that it seems to be crap at actually identifying faces from large collections.

They have a celebrity recognition demo, and it did very poorly on pictures of lots of people who are in the headlines… including people who ARE in the database. It spotted Marilyn Monroe in one of her really iconic shots, but not in another perfectly clear shot that it presumably hadn’t been trained on. Same thing for Einstein. Turning to the headlines, it misidentified Alexandria Ocasio-Cortez and Greta Thunberg as random minor celebrities I’d never heard of. In turn, it identified random minor celebrities, like members of current boy bands, as different random minor celebrities. It does well on heads of state. And both new and very old pictures of Elizabeth II worked. It may also be OK on Really Big Stars of Today (TM). But that’s about it.

So I assume it won’t really identify a random picture as belonging to somebody in a collection unless said collection has a lot of good, similar pictures of that same person.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

'Are we being flagged and searched? No? No problem then.'

Hardly a surprise they’d be eager to deploy laughably broken tech. It’s not like it’s going to flag them as crooks, after all, and even if it does, they’re not going to arrest their own, so it’s a win-win really. They get an excuse to search basically anyone they want and/or use the general public to test/improve the tech, and the public being watched pays for the whole thing.

Anonymous Coward says:

100% false negatives AND 100% false positives.

They’d get a more accurate result opening Excel and putting =INT(RAND()*100) in a cell.

Basically some high-up police in the Met have stolen ALL the funding for the facial recognition software, and it’s now just randomly saying Yes or No to a suspect and not actually running ANY face recognition code on the server…
