Federal Watchdog Finds Lots Of Facial Recognition Use By Gov't Agencies, Very Little Internal Oversight

from the getting-a-real-'Wild-West'-vibe-from-this dept

Facial recognition tech remains controversial, given its tendency to produce false positives and send cops after the wrong people. Private companies offering even sketchier tech than what’s already in use (looking at you, Clearview) have made everything worse.

The upside is this state of affairs has prompted at least one federal government oversight entity to do some actual oversight. The Government Accountability Office (GAO) has released its report [PDF] on federal agencies’ use of facial recognition tech and it contains a couple of surprises and, unfortunately, several of the expected disappointments. (via the Washington Post)

For instance, while we expect law enforcement agencies like the FBI, DEA, ATF, and TSA to use facial recognition tech, the report notes that a total of 20 agencies own or use the tech. That list also includes some unexpected agencies, like the IRS, US Postal Service, the FDA, and NASA.

There’s also a surprising number of Clearview users among federal agencies, which seems unwise given the company’s history of being sued, investigated, exposed as dishonest, and just being kind of terrible in every way. Of the 20 agencies that admitted using this tech, ten have used or have contracts with Clearview, outpacing other third-party offerings by a 2-to-1 margin.

What are these agencies using this tech for? Mainly criminal investigations.

According to the FBI, the system has been used for investigations of violent crimes, credit card and identity fraud, missing persons, and bank robberies, among others. The Department of Homeland Security’s Office of Biometric Identity Management offers a similar service to its partners (e.g., U.S. Immigration and Customs Enforcement). Specifically, the agency’s Automated Biometric Identification System can be used to search a photo of an unknown individual and provide potential matches (i.e., generate leads) to support criminal investigations. Federal agencies also reported using state, local, and non-government systems to support criminal investigations.

This includes people who may have committed criminal acts during last summer’s nationwide anti-police violence protests. One of the agencies on this list is the US Postal Inspection Service, which used Clearview to identify suspects who damaged USPS property or stole mail. The US Capitol Police also used Clearview to “generate leads” following the January 6th attack on the US Capitol.

That’s what’s known. There’s a lot that’s unknown, thanks to federal agencies apparently not caring who’s doing what with whatever facial recognition tech they have access to.

Thirteen federal agencies do not have awareness of what non-federal systems with facial recognition technology are used by employees. These agencies have therefore not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy. Most federal agencies that reported using non-federal systems did not own systems. Thus, employees were relying on systems owned by other entities, including non-federal entities, to support their operations.

Yay! Your federal tax dollars at work putting citizens at risk of being misidentified right into holding cells or deportation or whatever. The less you know, I guess. Some agencies had to “poll” employees to figure out how often this tech had been used, something that relies on honest self-reporting for accuracy. Literally any other system would provide better data, including the old standby “making some shit up.”

Then there’s mind-boggling stuff like this:

Officials from another agency initially told us that its employees did not use non-federal systems; however, after conducting a poll, the agency learned that its employees had used a non-federal system to conduct more than 1,000 facial recognition searches.

The line between “we don’t do this” and “we do this pretty much nonstop” is finer than I thought.

The CBP, which has used this tech for years, says it’s still “in the process of implementing a mechanism to track” use of non-federal facial recognition systems for employees. So far, the CBP has come up with nothing better than hanging up a couple of clipboards.

According to U.S. Immigration and Customs Enforcement officials, in November 2020 they were in the process of developing a list of approved facial recognition technologies that employees can use. In addition, log-in sheets will be made available to employees, allowing supervisors to monitor employee use of the technologies.

Behold the awesome power of the CBP, utilizing its billions in budget to send someone to Office Depot with a $20 bill and telling them to bring back change and a receipt.

In addition to being careless and cavalier about the use and deployment of unproven tech, the sullen shrugs of these thirteen government agencies are also possibly admissions of criminal activity.

When agencies use facial recognition technology without first assessing the privacy implications and applicability of privacy requirements, there is a risk that they will not adhere to privacy-related laws, regulations, and policies. There is also a risk that non-federal system owners will share sensitive information (e.g. photo of a suspect) about an ongoing investigation with the public or others.

The GAO closes its depressing report with 26 recommendations. Thirteen of them amount to "start tracking this stuff, you dolts." The other thirteen (bringing it to two recommendations per failing federal agency) are to assess the risks of the tech, including possible violations of privacy laws and the negative side effects of these systems misidentifying people.

There’s no good news in this report. Agencies are using unproven, sometimes completely unvetted tech without internal or external oversight. They’ve rolled out these programs well before putting required Privacy Impact Assessments or internal tracking/reporting measures in place. The only pleasant surprise is that this hasn’t resulted in more false arrests and detainments. But that definitely can’t be attributed to the care and diligence of agencies using this tech, because the GAO really wasn’t able to find much evidence of that. At least this puts the issue on the radar of Congress members who haven’t been paying much attention to this tech’s drift towards ubiquity.



Comments on “Federal Watchdog Finds Lots Of Facial Recognition Use By Gov't Agencies, Very Little Internal Oversight”

Anonymous Coward says:

I wouldn’t have a problem with facial recognition tech being used if law enforcement wasn’t so amazingly bad at jumping from "We have no idea who committed this crime" to "We absolutely know it’s this person because of ‘intuition.’" And more often than not, that person is going to be someone belonging to a marginalized group. The whole reason we need the fourth amendment, and other privacy protections in general, is because of how bad people are at jumping to conclusions. If only we could take their mats away.

Anonymous Coward says:

no surprise here then! tell me exactly which ‘government service’ actually has true and proper oversight. i’ll bet there isn’t a one! the USA is not the nation it once was. too many people in powerful positions are as corrupt as they can be, have no intention of changing, (other than to become more corrupt) with ensuring that only they and their friends can do whatever they please and the people get screwed into the ground! laws are being ignored or changed so that our freedom and privacy are thrown out the window and any attempt to find out anything about those ‘elite’ is hampered as much as possible with severe consequences for everyone who finds out what the fuckers are really up to!! you only have to look at the stranglehold the telecoms, internet companies and entertainment industries have on everyone/thing, from the government, regardless of the party in the White House, to the entire country! as for ‘Land of the Free, Home of the Brave’ that just doesn’t exist anymore!


Tanner Andrews (profile) says:

Qualified Immunity Does Not Help

When they arrest the wrong person based on an obviously wrong “match” by the Facial Recognition Technology (FRT, or “fart”), the person or his estate will likely bring suit for all the usual things. The courts will dismiss, because there is no clearly established case law that arresting and beating the wrong person is in any wise contrary to the Constitution.

They will claim good faith based on the FRT, even though some of the matches may appear to have been pulled from the machine’s nether regions. The level of good faith for cops is stunningly low. Where you or I might, for instance, think that having a warrant for one address should not allow violent entry into another, the courts find good faith and qualified immunity. A fortiori, where the machine says “match”, emitting a gaseous cloud of suspicion along with its report.

sumgai (profile) says:

They’ve rolled out these programs well before putting required Privacy Impact Assessments or internal tracking/reporting measures in place.

And what would make anyone think that any of these agencies will willingly "reset" back to where they will first assess these risks? Their position will be "Hey, we’ve done just fine so far without all this folderol, why should we worry about it now?" Or, "That’s too expensive", always a favorite government bug-a-boo.

But this does put the issue on the radar of Congress members ….

You’d think that it’d be of particular interest to a certain 28 members ….

Eldakka (profile) says:

There are different uses of facial recognition tech, and I suspect that some of the agencies that ‘use’ facial recognition tech could be caught up in the "do you use facial recognition tech" question.

For example, to me at least, it would seem perfectly reasonable for an access control system at, say, NASA’s facilities to use facial recognition tech. Say a staff member walks up to the security gate/entrance at JPL and swipes/waves/whatever they do with their ID card to enter the building. Having a facial recognition camera automatically pull up the biometric image from the swipee’s personnel record and compare it to the image the camera has just taken, to confirm the person swiping is the same person the employee records indicate, seems reasonable to me (assuming it actually works!).
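That 1:1 verification flow (live capture vs. the enrolled template, rather than a 1:N dragnet search) can be sketched in a few lines. Everything here is illustrative: the embedding vectors and the 0.8 threshold are made up, and a real system would get the embeddings from a face-recognition model rather than hard-coding them.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(stored_embedding, live_embedding, threshold=0.8):
    """Return True when the live capture matches the badge holder's
    enrolled template closely enough to open the gate."""
    return cosine_similarity(stored_embedding, live_embedding) >= threshold

# Hypothetical embeddings: the enrolled template vs. today's camera frame.
enrolled = [0.12, 0.88, 0.45, 0.31]
capture = [0.10, 0.90, 0.43, 0.33]
print(verify(enrolled, capture))  # near-identical vectors -> True
```

The key design point is that verification only ever compares against one claimed identity, so a false match misidentifies one badge holder at one gate; the investigative 1:N searches described in the article compare against entire databases, which is where the lead-generation (and wrong-person) risk comes from.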
