London Police Move Forward With Full-Time Deployment Of Facial Recognition Tech That Can't Accurately Recognize Faces

from the don't-let-the-fact-that-it-doesn't-work-stop-you dept

The London Metropolitan Police sure loves its facial recognition tech. But it's an unrequited love. The tech doesn't appear to have done anything for the Met during its past deployments.

Documents obtained by Big Brother Watch in 2018 showed the Met's deployments had rung up a 98% false positive rate in May of that year. Nothing improved as time went on. Subsequent documents showed a false positive rate of 100%. Every "match" was wrong. Not exactly the sort of thing you want to hear about tech capable of scanning 300 faces per second.

This followed an earlier report covering a test run by the South Wales Police at a handful of public events. In comparison, the South Wales tests were a success: a mere 92% of their matches were false positives.
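Figures like these are less mysterious than they sound: when almost nobody in a scanned crowd is actually on a watchlist, even a matcher that is right most of the time will produce mostly false alerts. A quick sketch of that base-rate arithmetic, with invented numbers (not the Met's or South Wales Police's actual parameters):

```python
# Illustrative base-rate arithmetic (invented numbers): when almost nobody
# scanned is genuinely on a watchlist, even a mostly-accurate matcher
# produces mostly false alerts.

def match_outcomes(crowd, on_watchlist, hit_rate, false_alarm_rate):
    """Expected (true_matches, false_matches) for one deployment."""
    true_matches = on_watchlist * hit_rate
    false_matches = (crowd - on_watchlist) * false_alarm_rate
    return true_matches, false_matches

# 100,000 faces scanned, 10 of them genuinely wanted; the matcher spots
# 80% of real targets and misfires on just 0.1% of everyone else.
true_m, false_m = match_outcomes(100_000, 10, 0.80, 0.001)
share_wrong = false_m / (true_m + false_m)

print(f"expected true matches: {true_m:.0f}")
print(f"expected false matches: {false_m:.0f}")
print(f"share of matches that are wrong: {share_wrong:.0%}")
```

With those made-up numbers, roughly 93% of all alerts are wrong even though the matcher itself sounds accurate, which is why "false positive rate" headlines depend heavily on how rare genuine targets are in the crowd.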

The Met's tech showed some slight improvement in 2019, moving up to a 96% false positive rate. This continued failure to recognize faces -- along with a number of privacy concerns -- prompted a UK Parliamentary Committee to call for an end of the use of facial recognition tech by UK government agencies. This advice was ignored by the Home Office, which apparently believed UK law enforcement would be able to fail upwards towards a brave new world of facial recognition tech worth the money being spent on it.

We've apparently reached that inflection point. Test runs are a thing of the past. It's time for Londoners to put their best face forward.

British police are to start operational use of live facial recognition (LFR) cameras in London, despite warnings over privacy from rights groups and concerns expressed by the government's own surveillance watchdog.

First used in the capital at the Notting Hill carnival in 2016, the cameras will alert police when they spot anyone already on "wanted" lists.

"The use of live facial recognition technology will be intelligence-led and deployed to specific locations in London," the city's Metropolitan Police said in a statement.

"Intelligence-led," says the agency that has so far only managed to incorrectly identify people almost 100% of the time. There's more "intelligence" further on in the article when the Met says the software that's hardly managed to correctly identify people will help "identify and apprehend suspects." Gun and knife crime top the list of things expected to be curtailed by unproven tech, followed by the sexual abuse of children and "protecting the vulnerable."

Also lol a bit at this, which uses a trite phrase made even triter by the abysmal performance of the Met's AI:

Metropolitan Police Assistant Commissioner Nick Ephgrave said in a statement: "We are using a tried-and-tested technology, and have taken a considered and transparent approach in order to arrive at this point."

He's technically correct. It has been tried and tested. What it hasn't been is accurate, and that's what counts most when people's rights and freedoms are on the line. But better an unknown number of innocent people be misidentified than allow a single suspect to go unscanned, I guess.



Filed Under: facial recognition, london, metropolitan police, police


Reader Comments



  • PaulT (profile), 31 Jan 2020 @ 3:24am

    "But better an unknown number of innocent people be misidentified than allow a single suspect to go unscanned, I guess."

    When you understand all the parameters, pretty much. Here's my evaluation - there's a newly confirmed Tory government. Among the Tories' many issues are a tendency to be the party of "law and order" without addressing any of the fundamental issues underlying any crime problems, especially in inner cities. They also have a tendency to launch expensive but useless IT projects to be sold to the bidder with the closest connections to some Tory MP. Then, there's this:

    https://www.theguardian.com/law/2019/may/04/stop-and-search-new-row-racial-bias

    So, the most likely explanation is that some public school chum of one of Boris' minions stands to profit from a meaty contract, while the Met needs an excuse to stop and search whoever they want without being called racist. Enter a flawed technology that incorrectly identifies nearly anyone, and they have the perfect excuse. "We're not racist, the computer told us to search you", forgetting to mention that the tech is telling them to search everyone.

    • Ninja (profile), 31 Jan 2020 @ 11:03am

      Re:

      I'm not sure if it's the case, but this seems to be the standard operational mode for "conservatives". More authoritarianism and money to their friends.

      Sadly when the opposition reaches power they seem to forget this and enact mechanisms that most certainly will be abused by said conservatives (see Democrats in the US).

  • Anonymous Coward, 31 Jan 2020 @ 4:00am

    Fail Upwards

    They must believe that they can fall upwards too.

  • Anonymous Coward, 31 Jan 2020 @ 5:25am

    Friday deep thoughts:

    "There's 3 types of people in this world: those who CAN count, and those who CAN'T"

    -surveillance doesn't prevent crime, it records it
    -just look at the crime stats in London... and they've been under surveillance since at least the 1980s IRA bombings

  • peter, 31 Jan 2020 @ 5:41am

    The problem is not with the tech

    The problem is not with the false positives, but what they do with them.

    Because then you are into the whole "We got a match so you are being detained and searched under Section 60. And if you cannot give us ID we are arresting you to determine who you are.". Multiply that by potentially thousands of innocent pissed off people, and you are starting to get a measure of how badly this can go.

    • Anonymous Coward, 31 Jan 2020 @ 6:16am

      Re: The problem is not with the tech

      The good news is that if the false positive rate is 92 percent, they probably won't get into the habit of going apeshit every time they get a match.

      The bad news is that if they get the false positive rate down to 1 percent, those 1 percent are going to be really fucked.

      Better a high error rate than a low-but-not-zero error rate.

  • Anonymous Coward, 31 Jan 2020 @ 5:56am

    a false positive rate of 100%. Every "match" was wrong. Not exactly the sort of thing you want to hear about tech capable of scanning 300 faces per second.

    That's a really dumb thing to say. The issue isn't how many faces it scans. The issue is how many actual matches it produces for humans to dig through.

    If it scans 10,000,000,000 faces and produces 1 match that's wrong, that's a 100 percent false positive rate... but it is not a problem in the way that you are trying to suggest. Obviously not producing any good matches at all is a problem, but it's a different kind of problem.

    92% of its matches were false positives

    96% false positive rate

    Those could be perfectly acceptable numbers.

    Suppose that you had 100 matches, each of which required 5 minutes of somebody's time to check manually. If 96 of those matches were false, and 4 of them were true matches for axe murderers, which allowed you to locate and apprehend those axe murderers, then that would be an extremely successful system. You would have caught 4 axe murderers with a day and a half of work.

    Obviously these systems aren't going to be providing that kind of dramatic result, but it's just innumerate nonsense to claim that even a 99 percent false positive rate is inherently bad. The false positive rate of looking intensively at everybody would be even higher.
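The arithmetic in that scenario is easy to sanity-check (the numbers are the commenter's hypotheticals, not real deployment figures):

```python
# Checking the hypothetical scenario above: 100 matches, 5 minutes of
# manual review each, 96 false positives and 4 genuine hits.
matches = 100
minutes_per_check = 5
true_hits = 4

total_minutes = matches * minutes_per_check   # 500 minutes of review work
total_hours = total_minutes / 60              # about 8.3 hours
precision = true_hits / matches               # 4% of matches are genuine

print(f"review workload: {total_hours:.1f} hours")
print(f"precision: {precision:.0%}")
```

Five minutes per match works out to roughly a full working day of review in exchange for the four genuine hits, which is the trade-off the commenter is pointing at: a high false-positive *rate* can still coexist with a manageable false-positive *workload*.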

    But that's not the real problem with that argument. The real problem is that it distracts from the fact that these systems would be far more dangerous if they WORKED.

    It's not about the error rate. It's about the abuse rate, and even more than that it's about the horrible psychological and social effects of a world in which Authority Knows All and No Transgression Goes Unpunished. A world with widely deployed facial recognition is a dystopia even if the error rate is zero.

    If you constantly talk about the error rate, then when Big Brother manages to get the error rate down, or if Big Brother simply calls you on the patent invalidity of your specious pseudo-statistical arguments, you are going to be left with nothing to say.

    This is a constant problem on Techdirt. It seems like none of the writers here are willing to bite the bullet and say that these systems are bad in their goals, so they end up saying things that make no sense instead.

    • Anonymous Coward, 31 Jan 2020 @ 6:29am

      Re:

      All it takes is one false positive (and all that follows) for the public to view the entire system as untrustworthy. At this point I doubt they care what the public thinks.

      • Anonymous Coward, 31 Jan 2020 @ 11:50am

        Re: Re:

        When the computer throws out a false positive, it's shown to Officer Friendly down at the cop shop. Officer Friendly then compares that picture to a mug shot. If the resemblance isn't good enough to fool a human, then nobody ever hears about that false positive.

        • Anonymous Coward, 31 Jan 2020 @ 12:47pm

          Re: Re: Re:

          The computer throws out what exactly?

          • Anonymous Coward, 31 Jan 2020 @ 1:24pm

            Re: Re: Re: Re:

            The computer puts up an alert saying "possible person of interest detected on camera number 1234", with the best face shot from the camera video, a link to watch the whole video, and the best matching shots of that person in the database.

            The user then looks to see if they agree with the computer that the person in the video is the person in the database. If they do agree, they dispatch somebody or something.

            ... or more likely the hit just gets quietly logged, and the person doesn't even look at the mug shots until later, or unless there are multiple hits for that individual in that area.

            What do YOU think happens? The camera deploys a shotgun and kills the person?
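The triage flow that comment describes can be sketched in a few lines. Everything here is hypothetical, with names invented for illustration (there is no public Met system or API this is based on): log every hit quietly, and only surface one for human review once the same person has been spotted repeatedly.

```python
# Hypothetical sketch of the triage flow described above: every hit is
# logged, and a hit is only escalated for human review once the same
# person has been matched more than once.
from dataclasses import dataclass, field

@dataclass
class Alert:
    camera_id: str
    person_id: str   # best-matching database identity
    score: float     # matcher confidence, 0..1

@dataclass
class TriageQueue:
    escalate_after: int = 2                        # repeat hits before review
    hit_counts: dict = field(default_factory=dict)

    def log(self, alert: Alert) -> str:
        """Record a hit; return 'review' once it warrants human attention."""
        count = self.hit_counts.get(alert.person_id, 0) + 1
        self.hit_counts[alert.person_id] = count
        return "review" if count >= self.escalate_after else "logged"

queue = TriageQueue()
print(queue.log(Alert("cam-1234", "suspect-42", 0.91)))  # logged
print(queue.log(Alert("cam-1301", "suspect-42", 0.88)))  # review
```

The point of the threshold is exactly what the commenter suggests: a single hit just gets quietly logged, and nobody is dispatched until the computer's opinion has been corroborated.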

            • Anonymous Coward, 31 Jan 2020 @ 3:05pm

              Re: Re: Re: Re: Re:

              I was curious about the statement "the computer throws out a false positive".

              It sounds as though the computer was throwing out a false positive, as in it found one and got rid of it. But that is not what you meant; I misread it.

            • Scary Devil Monastery (profile), 4 Feb 2020 @ 1:15am

              Re: Re: Re: Re: Re:

              "What do YOU think happens? The camera deploys a shotgun and kills the person?"

              Once the camera shot gets linked to the successor of what automated profiling eventually becomes, and politicians get it into their heads that you can predict which person is a murderer without that person actually having to kill someone first, then yes. Modern phrenology, the urge to identify the Bad Guy(TM) at the simple press of a button, will have at least a few authoritarian regimes investigating the option. Not as clean-cut as the "Selbstschussanlagen" at the DDR border, but I doubt China, for instance, will hesitate to roll out a program like that all over Xinjiang.

              What is actually going to happen here in the more democratic west, is that a SWAT team will be alerted that Abu "Baby-eating bomber man" wanted on multiple counts of terrorism, has just been spotted rolling a small trolley disguised as a baby carriage down the street towards parliament. No one's going to take the time to actually check whether it's a false positive or not.

              Once the proud new parent taking his offspring for a walk has had his afternoon ruined by a few 5.56 mm long rifle rounds through his skull it'll be just another lamentable case of "Jean Charles de Menezes".

  • bobob, 31 Jan 2020 @ 9:20am

    I predict a creative makeup trend.

      • Sok Puppette, 31 Jan 2020 @ 11:42am

        Re: Re: creative makeup

        https://www.documentjournal.com/2020/01/anti-surveillance-makeup-could-be-the-future-of-beauty/

        I happened to be playing with the AWS Rekognition demo the other day, and I fed it a bunch of makeup jobs from the CV dazzle site, as well as various other images with "countermeasures" from around the Web.

        Given a nice clear picture, it found every single face and every single feature on every face. It also did a good job of identifying age, sex and mood, right through some pretty extreme makeup. Try it out. It's available to the public.

        The problem with the countermeasures is that you never know whether the other guy has out-evolved you.

        By the way, the good thing about Rekognition is that it seems to be crap at actually identifying faces from large groups.

        They have a celebrity recognition demo, and it did very poorly on pictures of lots of people who are in the headlines... including people who ARE in the database. It spotted Marilyn Monroe in one of her really iconic shots, but not in another perfectly clear shot that it presumably hadn't been trained on. Same thing for Einstein. Turning to the headlines, it misidentified Alexandria Ocasio-Cortez and Greta Thunberg as random minor celebrities I'd never heard of. In turn it identified random minor celebrities, like members of current boy bands, as different random minor celebrities. It does well on heads of state, and both new and very old pictures of Elizabeth II worked. It may also be OK on Really Big Stars of Today (TM). But that's about it.

        So I assume it won't really identify a random picture as belonging to somebody in a collection unless said collection has a lot of good, similar pictures of that same person.

  • That One Guy (profile), 31 Jan 2020 @ 10:32am

    'Are we being flagged and searched? No? No problem then.'

    Hardly a surprise they'd be eager to deploy laughably broken tech; it's not like it's going to flag them as crooks, after all, and even if it does, they're not going to arrest their own, so it's a win-win really. They get an excuse to search basically anyone they want and/or use the general public to test/improve the tech, and the public being watched pays for the whole thing.

  • Anonymous Coward, 31 Jan 2020 @ 11:25am

    100% false negatives AND 100% false positives.

    They'd get a more accurate result opening Excel and putting =INT(RAND()*100) in a cell.

    Basically some higher-ups in the Met have stolen ALL the funding for the facial recognition software, and it's now just randomly saying Yes or No to a suspect without actually running ANY face recognition code on the server....

  • Anonymous Coward, 3 Feb 2020 @ 8:15am

    Obvious get rich quick scheme strikes again.


