Facial Recognition Software That Returns Incorrect Results 20% Of The Time Is Good Enough For The FBI
from the 80%-of-the-time,-it-works-EVERY-time dept
When deploying technology that has the potential to put actual human beings behind bars, what should be the acceptable margin of error? Most human beings, especially those whose natural aversion to being housed with actual criminals has kept them from committing any crimes, would prefer (as if they had a choice) that this number be as close to zero as humanly (and technologically) possible.
The FBI, on the other hand, which possesses the technology and power to nudge people towards years of imprisonment, apparently feels a one-in-five chance of bagging the wrong man (or woman) is no reason to hold off on the implementation of facial recognition software.
Documents acquired by EPIC (Electronic Privacy Information Center) show the FBI rolled out a ton of new tech (under the name NGI — “Next Generation Identification”) with some very lax standards. While fingerprints are held to a more rigorous margin of error (5% max — which is still a 1-in-20 “acceptable” failure rate), facial recognition is allowed much more leeway. (The TAR [True Acceptance Rate] details begin on page 247.)
NGI shall return the correct candidate a minimum of 85% of the time when it exists in the searched repository, as a result of facial recognition search in support of photo investigation services.
NGI shall return the incorrect candidate a maximum of 20% of the time, as a result of facial recognition search in support of photo investigation services.
The FBI’s iris recognition program is subjected to a similar lack of rigor.
NGI shall return the correct candidate a minimum of 98% of the time when it exists in the searched repository, as a result of iris recognition search in support of iris investigation services.
NGI shall return the incorrect candidate a maximum of 10% of the time, as a result of iris recognition search in support of iris investigation services.
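To put those quoted ceilings in concrete terms, here's a minimal back-of-the-envelope sketch in Python. The error-rate maximums (20% for facial, 10% for iris) come straight from the documents above; the search volume is an invented round number purely for illustration, not an FBI figure.

```python
# Back-of-the-envelope math on the NGI acceptance criteria quoted above.
# The error-rate ceilings come from the FBI documents; the number of
# searches is a hypothetical figure used only to illustrate scale.

FACE_INCORRECT_MAX = 0.20   # facial recognition: up to 20% incorrect candidates
IRIS_INCORRECT_MAX = 0.10   # iris recognition: up to 10% incorrect candidates

searches = 10_000  # hypothetical number of investigative searches

face_wrong = int(searches * FACE_INCORRECT_MAX)
iris_wrong = int(searches * IRIS_INCORRECT_MAX)

print(f"Facial: up to {face_wrong:,} incorrect candidates per {searches:,} searches")
print(f"Iris:   up to {iris_wrong:,} incorrect candidates per {searches:,} searches")
```

At that hypothetical volume, the facial recognition threshold tolerates up to 2,000 incorrect candidates, each one a potential lead pointed at the wrong person.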
These documents date back to 2010, so there’s every reason to believe the accuracy of the software has improved. Even so, the problem is that the FBI decided potentially being wrong 20% of the time was perfectly acceptable, and no reason to delay implementation.
Presumably, the FBI does a bit more investigating on hits in its NGI database. But it's worrying that an agency like this one — one that hauls people in for statements wholly dependent on an FBI agent's interpretation (the FBI remains camera-averse and uses its own transcriptions of questioning as evidence) — would so brazenly move forward with tech that could potentially land every fifth person in legal hot water, simply because the software "thought" that person was a bad guy.
Making this worse is the fact that the FBI still hasn't updated its 2008 Privacy Impact Assessment, despite telling Congress in 2012 that a new assessment was in the works.
On top of the brutal (but “acceptable”) margin of error is the fact that the FBI has made a habit of deploying nearly every form of privacy-invasive technology without putting together even the most minimal of guidelines or privacy-aware policies. Apparently, these concerns only need to be dealt with when and if they’re pointed out by OIG reports or lawsuits brought by privacy advocates.