Facial Recognition Software That Returns Incorrect Results 20% Of The Time Is Good Enough For The FBI

from the 80%-of-the-time,-it-works-EVERY-time dept

When deploying technology that has the potential to put actual human beings behind bars, what should be the acceptable margin of error? Most human beings, especially those who haven’t committed any crime due to their natural aversion to being housed with actual criminals, would prefer (as if they had a choice) this number to be as close to zero as humanly (and technologically) possible.

The FBI, on the other hand, which possesses the technology and power to nudge people towards years of imprisonment, apparently feels a one-in-five chance of bagging the wrong man (or woman) is no reason to hold off on the implementation of facial recognition software.

Documents acquired by EPIC (Electronic Privacy Information Center) show the FBI rolled out a ton of new tech (under the name NGI — “Next Generation Identification”) with some very lax standards. While fingerprints are held to a more rigorous margin of error (5% max — which is still a 1-in-20 “acceptable” failure rate), facial recognition is allowed much more leeway. (The TAR [True Acceptance Rate] details begin on page 247.)

NGI shall return the correct candidate a minimum of 85% of the time when it exists in the searched repository, as a result of facial recognition search in support of photo investigation services.

NGI shall return the incorrect candidate a maximum of 20% of the time, as a result of facial recognition search in support of photo investigation services.

The FBI’s iris recognition program is subjected to a similar lack of rigor.

NGI shall return the correct candidate a minimum of 98% of the time when it exists in the searched repository, as a result of iris recognition search in support of iris investigation services.

NGI shall return the incorrect candidate a maximum of 10% of the time, as a result of iris recognition search in support of iris investigation services.

These documents date back to 2010, so there’s every reason to believe the accuracy of the software has improved. Even so, the problem is that the FBI decided potentially being wrong 20% of the time was perfectly acceptable, and no reason to delay implementation.
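
To put those thresholds in concrete terms, here is a rough illustration, assuming the rates apply uniformly per search (the documents don’t say how the rates were to be measured, and the two spec lines are independent requirements, since a returned candidate list can contain both right and wrong candidates):

```python
# Rough illustration of the NGI facial recognition thresholds per 1,000
# searches where the person actually is in the repository. The two spec
# lines are independent requirements (a candidate list can contain both
# correct and incorrect candidates), so they are scaled separately here.

searches = 1_000
correct_floor = 0.85      # correct candidate returned at least 85% of the time
incorrect_ceiling = 0.20  # incorrect candidate returned at most 20% of the time

print(f"correct returns:   >= {searches * correct_floor:.0f}")      # >= 850
print(f"wrong-person hits: <= {searches * incorrect_ceiling:.0f}")  # <= 200
```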

Presumably, the FBI does a bit more investigation on hits in its NGI database, but it’s worrying that an agency like this one — one that hauls people in for statements wholly dependent on an FBI agent’s interpretation (the FBI remains camera-averse and uses its own transcriptions of questioning as evidence) — would so brazenly move forward with tech that could potentially land every fifth person it flags in legal hot water, simply because the software “thought” the person was a bad guy.

Making this worse is the fact that the FBI still hasn’t updated its 2008 Privacy Impact Assessment, despite the fact it told Congress in 2012 that it had a new assessment in the works.

On top of the brutal (but “acceptable”) margin of error is the fact that the FBI has made a habit of deploying nearly every form of privacy-invasive technology without putting together even the most minimal of guidelines or privacy-aware policies. Apparently, these concerns only need to be dealt with when and if they’re pointed out by OIG reports or lawsuits brought by privacy advocates.

NGI System Requirements



Comments on “Facial Recognition Software That Returns Incorrect Results 20% Of The Time Is Good Enough For The FBI”

Anonymous Coward says:

85% of the time, IF THE PERSON IS THERE, it finds them!!!!

Damn, that is AWESOME!!

So, 85% for ANY ONE TIME, and the person walks past 6 cameras; what are the odds that ONE of them will detect him?

So the camera misses him when he walks in. What about the next time, when he walks out? Or do you think it would not search that face again, or would not even search each face multiple times with each scan (or frame), each one having an 85% chance of detecting him?

See where I am coming from here?
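
For what it’s worth, the math behind that point, assuming each camera pass is an independent 85% search (an assumption; repeated passes of the same face are correlated in practice, so this overstates the real-world gain):

```python
# Probability that at least one of n camera passes detects a person who is
# actually in the database, given an 85% per-pass hit rate. Independence
# between passes is an assumption and overstates the real-world gain.

def p_detected_at_least_once(per_pass=0.85, passes=6):
    return 1 - (1 - per_pass) ** passes

print(f"{p_detected_at_least_once():.4%}")  # ~99.9989% after 6 passes
```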

PaulT (profile) says:

Re: Re:

So, as long as a potential terrorist keeps walking past monitored cameras, eventually you’ll get him? What a great system, well worth the cost and loss of privacy for innocent citizens!

Sarcasm aside, you seem to miss this important part of the quote: “NGI shall return the incorrect candidate a maximum of 20% of the time”

In other words, 1 out of every 5 times, it’s not that the system has failed to detect the person being looked for, it’s that it identifies the wrong person. Now, consider not only the problems with sending (presumably) armed officers against innocent people, but the cost and wasted time involved in sorting out the incorrect data. Is it really worth it at this point compared to more traditional policing?

Chris-Mouse (profile) says:

Re: You've got the statistics backwards.

15% of the time the software will incorrectly identify a terrorist as an innocent person. From a security standpoint, that’s not a problem; the next camera will catch them. The problem is that 20% of the time the system will flag an innocent person as a terrorist. That’s a massive problem, and here’s why. Boston’s Logan airport handles about 2.5 million passengers every month. 20% of that is about 500,000 false alarms every month, or one false alarm about every five seconds, 24 hours a day, seven days a week. Just how long do you think the security people will put up with that before they start treating every alarm as a false alarm, and ignoring it totally?

This facial recognition system needs an error rate more like 0.00002% before it’s going to be much use for anything.
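
The arithmetic checks out; here is a minimal sketch of it, using the comment’s own figures (the 2.5 million passengers/month number is the commenter’s estimate, not an official one):

```python
# Back-of-envelope check of the false-alarm arithmetic above, using the
# commenter's figures: 2.5 million passengers/month, 20% false positives.

passengers_per_month = 2_500_000
false_positive_rate = 0.20

false_alarms = passengers_per_month * false_positive_rate  # 500,000 per month
seconds_per_month = 30 * 24 * 60 * 60                      # ~2.6 million

print(f"{false_alarms:,.0f} false alarms per month")
print(f"one every {seconds_per_month / false_alarms:.1f} seconds")
```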

Anonymous Coward says:

Re: Re: You've got the statistics backwards.

It wouldn’t be so much an alarm as a bias list, which of course takes your attention off the 15%, though there is reason to be concerned about that too. While there are perfectly sensible ways of using tools like these, the security state is manifestly /not/ sensible, with a level of hysteria that would be an insult to inaccurate 19th-century stereotypes of women.

Anonymous Coward says:

Re: Re: Re: You've got the statistics backwards.

Some surveillance cameras come with a weapon attached; it’s called a Hellfire missile, and they rely on long-range target recognition. The named people killed have an interest in staying ‘dead’ when a mistake is made, since the US stops looking for them. Makes you think, doesn’t it?

Anonymous Coward says:

Instead of catching “bad guys,” why not educate them on ways they could get on the right side of the law before punishing them?

Instead of punishing “bad companies,” why not help them get on the right side of the law?

I see an emphasis on punishment and little on actually helping solve any problems, to the point that I don’t think most people even understand why they are being punished.

Anonymous Coward says:

Uhm. If it rejects a correct hypothesis 15% of the time, that is unproblematic. If it accepts a wrong hypothesis 20% of the time, that is a catastrophe for civil rights if it is the sole evidence! Normally anything below 75% is considered too random or plain wrong, and anything below 90% efficiency is speculative. Accepting a problematic error rate of 20% is troubling, but hopefully it will be improved on the software side before becoming standard issue, which makes it less worrying.

I do not understand why they would focus on iris scanning. The only reasonable use would be a database of career criminals to be recognized at the station. Iris scanning is far too rarely used in civil society to provide significant clues anyway! Fingerprints, DNA, and video/photo surveillance will be available on or around most crime scenes. Iris scans will not!

Anonymous Coward says:

I have written both facial and iris recognition programs, and the algorithms currently used in both of these biometric identification systems have about that failure rate.

It really just comes down to a lack of good algorithms for determining someone’s identity. However, I do not see this as a problem as long as a real person then takes the time to confirm the identity.
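
For what it’s worth, here is a minimal, hypothetical sketch of that “machine proposes, human confirms” workflow. The embedding matcher, cosine-similarity scoring, and threshold are illustrative assumptions, not anything documented about NGI:

```python
import numpy as np

# Hypothetical sketch of a "machine proposes, human confirms" workflow.
# The embedding model, threshold, and similarity metric are illustrative
# assumptions, not anything documented about NGI.

MATCH_THRESHOLD = 0.8  # arbitrary; real systems tune this against error rates

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def propose_candidates(probe, gallery, threshold=MATCH_THRESHOLD):
    """Return gallery entries similar enough to warrant human review.

    probe: embedding vector for the face being searched.
    gallery: dict mapping identity -> embedding vector.
    """
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    # The system only *proposes*; a human examiner makes the final call.
    return sorted(
        ((score, name) for name, score in scores.items() if score >= threshold),
        reverse=True,
    )
```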

John Fenderson (profile) says:

Re: Re:

However, I do not see this as a problem as long as a real person then takes the time to confirm the identity.

It depends on how it’s used. If you’re talking about access to special secure areas, OK.

If we’re talking about systems that are surveilling people in public places, though, this is still a really huge problem. Not only because of the huge waste of time and energy incurred by having someone verify a person’s identity in person, but because it would be more than a small inconvenience for a lot of innocent people.

What if you’re walking down the street, are pegged as a potential terrorist by a camera, and a cop comes to check you out? First, that’s a terrifying thing for most people right off the bat.

Second, what if you don’t have any identification? Does the cop let you go or haul you into the police station for positive ID? If he just lets you go, then there’s the out for any actual terrorists, and the entire system is instantly worthless.

Or are you arguing that we should be required to have ID on us at all times now? “Your papers, please…”

Chronno S. Trigger (profile) says:

Re: Re: Re:

I would assume that the FBI is only using the system for high-security areas or on people they’re already watching. A success rate of around 1% would just cost far too much if they tried to do this for everybody.

How did I get a 1% success rate, you ask? It’s all in how you fiddle with the numbers. Let me walk you through it.

There are 316,000,000 people living in the United States. 20% falsely identified is about 63,200,000. I don’t know how many people the FBI are looking for, but let’s assume a generous 1,000,000 people. That’s 850,000 people correctly identified. 850,000 people out of roughly 64,000,000 flags. That’s about 1.3%.

This does assume a lot of things. One, I really don’t think the FBI is looking for a million people, let alone has pictures of all of them clear enough to feed into the software. Two, this assumes that every single person only ever walks past one camera. The success rate drops dramatically as people walk past more cameras.

It would cost far too much money to use a system that in the end probably has a success rate well below 1%.
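
That base-rate arithmetic generalizes; a minimal sketch, assuming one scan per person and treating the 85% floor and 20% ceiling as flat per-search rates (a simplification the source documents don’t spell out):

```python
# Minimal sketch of the base-rate arithmetic above. Assumptions (not from
# the FBI documents): one scan per person, and the 85% true-accept floor
# and 20% false-positive ceiling treated as flat per-search rates.

def flag_precision(population, watchlist, tar=0.85, fpr=0.20):
    """Fraction of flags that point at someone actually on the watchlist."""
    true_hits = watchlist * tar                   # wanted people correctly flagged
    false_flags = (population - watchlist) * fpr  # innocents incorrectly flagged
    return true_hits / (true_hits + false_flags)

print(f"{flag_precision(316_000_000, 1_000_000):.1%}")  # ~1.3%
```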

Torg (profile) says:

Re: Re: Re:

“What if you’re walking down the street, are pegged as a potential terrorist by a camera, and a cop comes to check you out? First, that’s a terrifying thing for most people right off the bat.”

That would be a very stupid system. It’s simple. Humans are quite good at facial recognition. Someone in your proposed scenario is presumably already being shown a picture of the person that’s been flagged. Just put the picture the camera’s matched them to next to it and let the cop check for any differences that the computer missed. This should keep the rate of false positives at about the same level as when humans just watched the cameras without help.

Anonymous Coward says:

Smells a lot like fear mongering here. Oh no, the big bad man is out to get everyone.
Important question here: what are these results used for? It sure as hell isn’t imprisonment. That’s what the broken court system is for. And hey, 20% is better than the 100% you get for not showing “respect” to a cop, and guess what, that happens right now.

Anonymous Coward says:

Well, I guess we know how to bring James Clapper to justice then

I guess we know how to bring James Clapper to justice then. We charge him with a different crime committed by someone else that the facial recognition software says was done by him.

I figure we only need to find about 10 crimes with video footage to get at least 1 match to Clapper, then we can send him to jail for robbing the gas station! That’s as close to charging him for his illegal spying on America and lying to Congress as we’ll get.

dt says:

This is total BS ..

The fact that the system has 80% accuracy doesn’t mean it breaks 20% of the time. It is a legal statement to protect the developer of the software. I have worked on NGI for the FBI.

The tool is not used to arrest anyone anyway. It is used as an investigative tool, to be combined with other information. Even if our software gets a 99% hit on fingerprints – something that can be beaten by wearing gloves – an investigator has to look over the fingerprints and make the final determination. A computer cannot go to court and testify.

This whole thing is one of the stupidest things I have read on Techdirt.

aldestrawk says:

Re: This is total BS ..

The ethics of using an ID system that is not completely accurate depends entirely on how it is used. Yes, 20% for false positives is a maximum, and real-world usage will likely show better results. But if you think the FBI is incapable of intentionally abusing an ID system or making gross mistakes, look at the case of Brandon Mayfield. On the basis of fingerprint identification, and the fact that he converted to Islam in the late 1980s, he was arrested in 2004 and held for over two weeks as a material witness. The FBI first claimed his fingerprints were a 100% match with those found on a bag from the Madrid train bombing. It turned out, from information discovered during the lawsuit Mayfield brought, that there were 20 individuals in the US whose fingerprints were SIMILAR to the one found in Spain. The FBI investigated all of them. Because of Mayfield’s Islamic beliefs he became the prime suspect, despite not having left the US in over 10 years. Furthermore, before his arrest, Spanish authorities said his fingerprints were not a match. The FBI disregarded all this and arrested him anyway.

It doesn’t worry me that the FBI is looking to adopt facial recognition, and I probably agree with you that this article complains about its accuracy without knowing how it will be used. What worries me is how they will use it. Do not fool yourself into thinking the FBI will not use facial recognition to arrest someone. It may not be the only factor in the arrest, but, as with fingerprints, law enforcement tends to be eagerly biased in favor of its usage and to disregard what science says about the level of doubt.

J Durocher says:

Completely and totally SDRAWKCAB

FBI agents already manually compare suspect photos to their database. It’s labor intensive. So they put out a spec saying we’d like to purchase some software that will eliminate *at least* 85% of this manual review.

I would like to find some software that eliminates 85% of my busy work, too.
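
Read that way, the spec is a workload target rather than an evidentiary standard. A rough sketch of the math under that reading (the weekly caseload figure below is invented for illustration; only the 85% floor comes from the requirements document):

```python
# Rough sketch of the "triage" reading of the spec. The weekly caseload
# figure is invented for illustration; only the 85% floor comes from the
# requirements document.

comparisons_per_week = 10_000  # hypothetical analyst workload
automated_hit_floor = 0.85     # spec: correct candidate returned >= 85% of the time

still_manual = comparisons_per_week * (1 - automated_hit_floor)
print(f"{still_manual:,.0f} comparisons still need hands-on review")  # 1,500
```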
