Surprising, But Important: Facebook Sorta Shuts Down Its Face Recognition System

from the good-to-see dept

A month ago, I highlighted how Facebook seemed uniquely bad at taking a long term view and publicly committing to doing things that are good for the world, but bad for Facebook in the short run. So it was a bit surprising earlier this week to see Facebook (no I’m not calling it Meta, stop it) announce that it was shutting down its Face Recognition system and (importantly) deleting over a billion “face prints” that it had stored.

The company’s announcement on this was (surprisingly!) open about the various trade-offs here, both societally and for Facebook, though (somewhat amusingly) throughout the announcement Facebook repeatedly highlights the supposed societal benefits of its facial recognition.

Making this change required careful consideration, because we have seen a number of places where face recognition can be highly valued by people using platforms. For example, our award-winning automatic alt text system, that uses advanced AI to generate descriptions of images for people who are blind and visually impaired, uses the Face Recognition system to tell them when they or one of their friends is in an image.

[…]

But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole. There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.

One interesting tidbit buried in this is that only about 1/3 of Facebook users opted in to use Facebook’s facial recognition tool (despite the company pushing it heavily on users). At the very least, it showed that a large number of users weren’t comfortable with the technology.

There’s also the issue that, while they’re turning off the tool and deleting the facial prints, the NY Times notes they’re hanging on to the algorithm that was built on all those faces:

Although Facebook plans to delete more than one billion facial recognition templates, which are digital scans of facial features, by December, it will not eliminate the software that powers the system, which is an advanced algorithm called DeepFace. The company has also not ruled out incorporating facial recognition technology into future products, Mr. Grosse said.

That’s resulted in some (expected) amount of cynicism from Facebook’s critics that Facebook “got what it wanted” and is now moving on. However, I think that’s a bit silly. Facebook could have easily kept the facial recognition program going. Of all the regulatory pressures the company is facing, this was way down the list and barely on the radar.

And, to make a bigger point, here’s a case where the company is actually doing the right thing: turning off a questionable product and deleting a ton of data it collected. And we should at least encourage both Facebook and other companies to be willing to make that decision based on recognizing the societal risks, and without waiting around until they’re forced to do so.

Companies: facebook


Comments on “Surprising, But Important: Facebook Sorta Shuts Down Its Face Recognition System”

16 Comments
Koby (profile) says:

Re: Re: Rarely Is It Not About Money

I’m confident that FB didn’t secretly divulge to you the metrics between the folks that did opt in versus the folks that didn’t. For a corporation that attempts to track everything, and then bases its decisions upon the data, I would find it surprising if FB didn’t conduct a study on its users, or conducted one showing it wasn’t losing engagement yet decided to abandon its facial recognition program out of the goodness of its heart. And then privately messaged you with the data just in case someone doubted their sincerity. You don’t have to shill for them so hard Maz, geez!

PaulT (profile) says:

Re: Re: Re: Rarely Is It Not About Money

"I’m confident that FB didn’t secretly divulge to you the metrics between the folks that did opt in, versus the folks that didn’t"

So, you were talking out of your ass when you claimed that over 1 billion people used FB less because of this single issue, because nobody has the data, including you? Gotcha.

Whenever you have evidence to back up your claims, we’re all ears, but until then stop guessing, you’re usually wrong even on the issue where evidence does exist.

PaulT (profile) says:

Re: Rarely Is It Not About Money

I’ll take the bet if you can provide evidence that comes from somewhere other than your rear end. Given your constant inability to deal with reality, especially on the subject of sites that you’ve been bitter about for years since they kicked your klan buddies off their property, I don’t think we should take your guesswork seriously.

Anonymous Coward says:

Faces have too few bits to be pseudo-unique at scale

I suspect that they found that, at large numbers, facial recognition had far too many collisions, even if they resigned themselves to accepting identical twins as an edge case. If there just aren’t enough distinguishing bits in the "noisy" environment of real life, they would find their data turning to shit for identification purposes. Unlike police forces and the facial recognition merchants everywhere, they have no incentive to stick their head in the sand about it.
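The commenter's intuition can be put on a back-of-the-envelope footing with the birthday problem: if each face effectively carries only k distinguishing bits, the expected number of colliding pairs among n faces is roughly n(n-1)/2^(k+1). The bit counts below are purely illustrative assumptions, not measurements of any real system:

```python
def expected_collisions(n_faces: int, effective_bits: float) -> float:
    """Birthday-problem estimate: expected number of face pairs that
    land in the same bucket, if each face maps to a uniformly random
    point in a space of 2**effective_bits distinguishable values."""
    buckets = 2.0 ** effective_bits
    return n_faces * (n_faces - 1) / (2.0 * buckets)

# Roughly the number of face prints Facebook says it will delete.
n = 1_000_000_000
for bits in (32, 40, 48, 64):
    print(f"{bits} effective bits -> ~{expected_collisions(n, bits):.3g} colliding pairs")
```

Even under a generous 64-bit assumption the expected collision count stays below one, but at 32 effective bits a billion faces would produce on the order of a hundred million colliding pairs — which is the commenter's point about identification breaking down at scale.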

Now what doesn’t have that issue is procedural face generation. You don’t need to map faces to identities, just generate convincing ones. Once the model is trained, they can dump their putrefying dataset.

That is my suspicion anyway: that they decided to use their algorithm for what it turned out to be good for. That has happened several times in Silicon Valley — one Google attempt at making a street-number reader more tolerant of real-world distortions didn’t work so well for that job, but turned out to be good at breaking letter-style Captchas.

Anonymous Coward says:

There’s also the issue that, while they’re turning off the tool and deleting the facial prints, the NY Times notes they’re hanging on to the algorithm

I mean, developers never toss out old code. You never know when you’ll be looking at some other problem, and go "wait, how did we fix that last time… ah, here it is".
