Clearview Finally Submits AI For Independent Testing; Only Tests Feature It Isn't Actually Selling
from the competing-for-a-high-score-no-one-cares-about dept
At long last, Clearview has finally had its AI tested by an independent party. It has avoided doing this since its arrival on the facial recognition scene, apparently content to bolster its reputation by violating state privacy laws, making statements about law enforcement efficacy that are immediately rebutted by law enforcement agencies, and seeing nothing wrong with scraping the open web for personal information to sell to government agencies, retailers, and bored rich people.
Kashmir Hill reports for the New York Times that Clearview joined the hundreds of other tech companies that have had their algorithms tested by the National Institute of Standards and Technology.
[M]ore than two years after law enforcement officers first started using the company’s app, Clearview’s algorithm — what allows it to match faces to photos — has been put to a third-party test for the first time. It performed surprisingly well.
In a field of over 300 algorithms from over 200 facial recognition vendors, Clearview ranked among the top 10 in terms of accuracy, alongside NTechLab of Russia, Sensetime of China and other more established outfits.
That seems to confirm CEO Hoan Ton-That’s repeated claims that Clearview’s AI is one of the most accurate in the business. But there’s a huge caveat to his claims and these test results. This test does not reflect real-world use of Clearview’s tech.
But the test that Clearview took reveals how accurate its algorithm is at correctly matching two different photos of the same person, not how accurate it is at finding a match for an unknown face in a database of 10 billion of them.
No one’s buying access to Clearview to perform this task. And that certainly isn’t what Clearview is selling or promising to potential customers when it pitches its 10 billion image database. So, Clearview calling this test result “an unmistakable validation” of its tech is, well, pure bullshit. It doesn’t validate anything. All it possibly shows is that Clearview could be used to verify identity by matching faces — something that might be useful for unlocking a phone or providing access to restricted areas.
What it doesn’t show is Clearview’s accuracy when it compares an uploaded photo to its billions of scraped images. Supposedly, Clearview will be allowing NIST to run a 1-to-many test of its AI (Ton-That says that will happen “shortly”). If that happens, we’ll finally be able to see if Clearview’s AI is as accurate as its CEO has repeatedly said it is.
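The distinction Hill is drawing can be sketched in a few lines of code: 1-to-1 verification compares a single pair of faces, while 1-to-many identification searches an entire gallery for the closest match to an unknown face. This is a purely illustrative sketch using toy embedding vectors and an arbitrary threshold, not Clearview's actual pipeline; real systems derive the embeddings from a neural network and search billions of entries.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity score between two face embeddings (higher = more alike).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb_a, emb_b, threshold=0.8):
    # 1-to-1 verification (what NIST tested here): are these two photos
    # of the same person?
    return cosine_similarity(emb_a, emb_b) >= threshold

def identify(probe, gallery, threshold=0.8):
    # 1-to-many identification (what Clearview actually sells): search a
    # gallery of embeddings for the best match to an unknown probe face.
    # Returns (index, score), or (None, score) if nothing clears the bar.
    scores = [cosine_similarity(probe, g) for g in gallery]
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return best, scores[best]
    return None, scores[best]
```

Note the asymmetry: a verification error affects one comparison, but in identification the chance of a false match grows with the size of the gallery, which is why a strong 1-to-1 score says little about searching 10 billion images.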
Even if it is accurate, it’s still facial recognition tech — something that comes with a lot of inherent drawbacks. Every AI tested by NIST showed some form of bias, like performing better when checking white male faces for matches. And it won’t change the fact that Clearview’s database is the product of web scraping — something that’s not illegal but definitely questionable. Internet users may agree to share information with others and the sites they use, but no one affirmatively agrees to allow Clearview to scoop up that information and sell it to government agencies. That’s what has resulted in it being sued in a couple of states, and being kicked out of Canada entirely.
Clearview has no product without the unwilling and unaware contributions of millions of internet users. Even if its AI isn’t as terrible as we’re all free to believe it is, it will still just be the bottom feeder in a murky cesspool of facial recognition tech providers. Floating to the top of NIST’s tank may give it better copy for its marketing materials, but it won’t earn it any respect.