from the good-rules,-cloudy-motive dept
Earlier this year, Microsoft faced backlash for appearing to be working with ICE to provide it with facial recognition technology. A January blog post from its Azure Government wing stated it had acquired certification to set up and manage ICE cloud services. The key bit was this paragraph, which certainly made it seem as if Microsoft was joining ICE in the facial recognition business.
This ATO [Authority to Operate] is a critical next step in enabling ICE to deliver such services as cloud-based identity and access, serving both employees and citizens from applications hosted in the cloud. This can help employees make more informed decisions faster, with Azure Government enabling them to process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification.
Roughly five months later, this blog post was discovered, leading to Microsoft receiving a large dose of social media shaming. A number of its own employees signed a letter opposing any involvement at all with ICE. A July blog post from the president of Microsoft addressed the fallout from the company’s partnership with ICE. It clarified that Microsoft was not actually providing facial recognition tech to the agency and laid out a number of ground rules the company felt would best serve everyone going forward.
This starting point has now morphed into a full-fledged rule set Microsoft will apparently be applying to itself. Microsoft’s Brad Smith again addresses the positives and negatives of facial recognition tech, especially when it’s deployed by government agencies. The blog post is a call for government regulation, not just of tech companies offering this technology, but for some internal regulation of agencies deploying this technology.
Smith’s post is long, thoughtful, and detailed. I encourage you to read it for yourself. But most of it falls under these headings — all issues Microsoft believes should be addressed via federal legislation.
First, especially in its current state of development, certain uses of facial recognition technology increase the risk of decisions and, more generally, outcomes that are biased and, in some cases, in violation of laws prohibiting discrimination.
Second, the widespread use of this technology can lead to new intrusions into people’s privacy.
And third, the use of facial recognition technology by a government for mass surveillance can encroach on democratic freedoms.
The three points affect everyone involved: the government, facial recognition tech developers, and private sector end users. Smith's post asks the government to police itself, as well as any vendors it deals with. It's a big ask, especially since the government has historically shown minimal restraint when exploiting new surveillance technology. It often falls on the nation's courts to regulate the government's tech use, rather than the government being proactively cautious when rolling out new tools and toys.
But it also demands a lot from the private sector and suggests those who can’t follow these rules Microsoft has laid out shouldn’t be allowed to offer their services to the government. Here’s what Smith proposes as a baseline for the tech side:
Legislation should require tech companies that offer facial recognition services to provide documentation that explains the capabilities and limitations of the technology in terms that customers and consumers can understand.
New laws should also require that providers of commercial facial recognition services enable third parties engaged in independent testing to conduct and publish reasonable tests of their facial recognition services for accuracy and unfair bias.
While human beings of course are not immune to errors or biases, we believe that in certain high-stakes scenarios, it’s critical for qualified people to review facial recognition results and make key decisions rather than simply turn them over to computers. New legislation should therefore require that entities that deploy facial recognition undertake meaningful human review of facial recognition results prior to making final decisions for what the law deems to be “consequential use cases” that affect consumers. […]
Finally, it’s important for the entities that deploy facial recognition services to recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers. This provides additional reason to ensure that humans undertake meaningful review, given their ongoing and ultimate accountability under the law for decisions that are based on the use of facial recognition.
This is the burden on the tech side. What the government needs to do is just not use it for mass surveillance or the continuous surveillance of certain people. Microsoft suggests warrants for continuous surveillance using facial recognition tech with the expected exceptions for emergencies and public safety risks.
When combined with ubiquitous cameras and massive computing power and storage in the cloud, a government could use facial recognition technology to enable continuous surveillance of specific individuals. It could follow anyone anywhere, or for that matter, everyone everywhere. It could do this at any time or even all the time. This use of facial recognition technology could unleash mass surveillance on an unprecedented scale.
We must ensure that the year 2024 doesn’t look like a page from the novel “1984.” An indispensable democratic principle has always been the tenet that no government is above the law. Today this requires that we ensure that governmental use of facial recognition technology remain subject to the rule of law. New legislation can put us on this path.
It's all good stuff that would protect citizens and curb abusive tech deployment if implemented across the board by tech companies. But that would likely require a legislative mandate, according to Microsoft. The end result is Microsoft asking the same entity it feels may abuse the tech to lay down federal guidelines for development and deployment.
I don’t have any complaints about what Microsoft’s proposing. I only question why it’s proposing it. When a large corporation starts asking for government regulation, it’s usually because increased regulation would keep the market smaller and help Microsoft weed out a few possible competitors. I wouldn’t say this is the only reason Microsoft is handing out a long wish list of government mandates, but there’s no way this isn’t a factor.
Microsoft’s management likely has genuine concerns about this tech and its future uses. Somewhat coincidentally, it’s also in the best position to make these arguments. Other than a supposed misunderstanding about selling facial recognition tech to ICE, the company hasn’t set its reputation on fire and/or been caught handing the government loads of tools that can be repurposed for oppression.
Other players in the facial recognition market have already ceded the high ground. Amazon has been handing out tech to law enforcement agencies even as Congress members are demanding answers from the company about its facial recognition software. Google may not be pushing facial recognition tech, but with it currently engaged in building an oppressor-friendly search engine for China's government, it can't really portray itself as a champion of civil liberties. Facebook has used facial recognition tech for years, but is currently so toxic no one really wants to hear what it has to say about privacy or government surveillance. Apple may have some guidance to offer, but the DOJ likely uses Tim Cook headshots for dartboards, making it less than receptive to the company's thoughts on biometric scanning. As for the rest of the players in the field — the multiple contractors who sell surveillance equipment to governments all over the world — they have zero concerns about government abuse or respecting civil liberties, so Microsoft's post may as well be written in Etruscan for all they'll get out of it.
I’m in firm agreement with Brad Smith/Microsoft that facial recognition tech is a threat to privacy and civil liberties. I also believe the companies crafting/selling this tech should vet their products thoroughly and be prepared to shut them down if they can’t eliminate bias or if products are being used to conduct pervasive, unjustified surveillance. I don’t believe most tech companies will do this voluntarily and know for a fact the government will not actively police use of these systems. The status quo — zero accountability from governments and government contractors — cannot remain in place. The courts may right some wrongs eventually, but until then, suppliers of facial recognition technology are complicit in the resulting civil liberties violations.
I applaud Microsoft for calling for action. But I will hold that applause until it becomes apparent Microsoft will maintain these standards internally, with or without a legislative mandate. If other companies choose to sign on as… I don't know… ethical surveillance tech dealers, that would be great. Asking the government to regulate tech development isn't the preferred course of action, but a surveillance tech Wild West isn't an ideal outcome either. Ideally, the government would set higher standards for adoption and deployment of tech along the lines Microsoft has proposed, policing itself by vetting its vendors better. But if the federal government were truly interested in limiting its abuse of tech developments, we would have seen some evidence of it already.
These suggestions should be voluntarily adopted by other tech companies, if for no other reason than it insulates them from elimination should the government decide it's going to raise its acquisition and deployment standards. Microsoft scores a PR win, if nothing else, simply by being first. I appreciate its staking out its stance on this issue, but remain cautiously pessimistic about the company's ability to live up to its own standards.
Filed Under: face recognition, regulations