from the taking-out-the-AI-trash dept
Clearview may as well exit Europe entirely. Things are not going to get better for it. Online privacy laws are far more restrictive on the other side of the pond and Clearview’s business model will always be in violation of those laws.
European laws require companies to obtain some form of consent from the people whose data they gather. Clearview doesn’t ask for anyone’s consent. It scrapes publicly available websites for any photos and personal data it can find and sells access to this database and its facial recognition AI to pretty much anyone who wants it.
That has resulted in a string of losses for Clearview in and around Europe. Canada may not be European, but its privacy laws are similarly protective of residents. It booted Clearview from the country last February, saying its web-scraping habits violated local laws.
Nine months later, it was ejected by Australia (another Commonwealth nation), again for violating privacy laws. France did the same thing one month later, citing violations of the GDPR. Four months later, it was Italy telling the company to leave the country and attaching a $21 million fine to its demand.
At roughly the same time as the French rejection, UK regulators threatened Clearview with a $23 million fine. The reason? The same as everywhere else: privacy law violations. This threat arrived 18 months after Clearview started doing business in the UK, offering its services to law enforcement, private equity firms, the Ministry of Defence, and (oddly) a charity headed by author J.K. Rowling.
The threat is now a reality, although the ask appears to have decreased a bit. Here’s James Vincent with the details for The Verge.
Controversial facial recognition company Clearview AI has been ordered to delete all data belonging to UK residents by the country’s privacy watchdog, the Information Commissioner’s Office (ICO). The ICO also fined Clearview £7.5 million ($9.4 million) for failing to follow the UK’s data protection laws.
The breaches are severe, as the ICO notes in its press release:
The ICO found that Clearview AI Inc breached UK data protection laws by:
– failing to use the information of people in the UK in a way that is fair and transparent, given that individuals are not made aware or would not reasonably expect their personal data to be used in this way;
– failing to have a lawful reason for collecting people’s information;
– failing to have a process in place to stop the data being retained indefinitely;
– failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR);
– asking for additional personal information, including photos, when asked by members of the public if they are on their database. This may have acted as a disincentive to individuals who wish to object to their data being collected and used.
This is coupled with yet another demand to delete all data pertaining to those affected by this decision. Considering Clearview’s lawsuit losses and ejections from multiple countries, it should be getting better at locating affected data and deleting it from its database. Then again, it may not actually be doing what’s been demanded because (1) no one but Clearview has access to its database, and (2) the company seems to feel it’s beyond the reach of foreign laws.
Clearview may never decide to stop being the worst participant in the crowded facial recognition marketplace, but sooner or later, it’s going to have trouble turning a profit. What’s happened elsewhere in the world is going to continue happening. The GDPR simply does not allow the sort of data gathering Clearview engages in.
And, while US laws are far more permissive, Clearview is still going to find itself the target of irate citizens, pissed off legislators, state prosecutors, and US congressional reps. It may be able to find willing customers in the United States — many of them federal entities — but sooner or later, this gravy train ride is going to end, because there are plenty of other, more ethical competitors to choose from.