Microsoft Posts List Of Facial Recognition Tech Guidelines It Thinks The Government Should Make Mandatory

from the good-rules,-cloudy-motive dept

Earlier this year, Microsoft faced backlash for appearing to be working with ICE to provide it with facial recognition technology. A January blog post from its Azure Government wing stated it had acquired certification to set up and manage ICE cloud services. The key bit was this paragraph, which definitely made it seem Microsoft was joining ICE in the facial recognition business.

This ATO [Authority to Operate] is a critical next step in enabling ICE to deliver such services as cloud-based identity and access, serving both employees and citizens from applications hosted in the cloud. This can help employees make more informed decisions faster, with Azure Government enabling them to process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification.

Roughly five months later, this blog post was discovered, leading to Microsoft receiving a large dose of social media shaming. A number of its own employees signed a letter opposing any involvement at all with ICE. A July blog post from the president of Microsoft addressed the fallout from the company's partnership with ICE. It clarified that Microsoft was not actually providing facial recognition tech to the agency and laid out a number of ground rules the company felt would best serve everyone going forward.

This starting point has now morphed into a full-fledged rule set Microsoft will apparently be applying to itself. Microsoft's Brad Smith again addresses the positives and negatives of facial recognition tech, especially when it's deployed by government agencies. The blog post is a call for government regulation, not just of tech companies offering this technology, but for some internal regulation of agencies deploying this technology.

Smith's post is long, thoughtful, and detailed. I encourage you to read it for yourself. But most of it falls under these headings -- all issues Microsoft believes should be addressed via federal legislation.

First, especially in its current state of development, certain uses of facial recognition technology increase the risk of decisions and, more generally, outcomes that are biased and, in some cases, in violation of laws prohibiting discrimination.

Second, the widespread use of this technology can lead to new intrusions into people’s privacy.

And third, the use of facial recognition technology by a government for mass surveillance can encroach on democratic freedoms.

The three points affect everyone involved: the government, facial recognition tech developers, and private sector end users. The proposal asks the government to police itself, as well as any vendors it deals with. It's a big ask, especially since the government has historically shown minimal restraint when exploiting new surveillance technology. It often falls to the nation's courts to regulate the government's tech use, rather than the government being proactively cautious when rolling out new tools and toys.

But it also demands a lot from the private sector and suggests those who can't follow these rules Microsoft has laid out shouldn't be allowed to offer their services to the government. Here's what Smith proposes as a baseline for the tech side:

Legislation should require tech companies that offer facial recognition services to provide documentation that explains the capabilities and limitations of the technology in terms that customers and consumers can understand.

New laws should also require that providers of commercial facial recognition services enable third parties engaged in independent testing to conduct and publish reasonable tests of their facial recognition services for accuracy and unfair bias.

[...]

While human beings of course are not immune to errors or biases, we believe that in certain high-stakes scenarios, it’s critical for qualified people to review facial recognition results and make key decisions rather than simply turn them over to computers. New legislation should therefore require that entities that deploy facial recognition undertake meaningful human review of facial recognition results prior to making final decisions for what the law deems to be “consequential use cases” that affect consumers. [...]

Finally, it’s important for the entities that deploy facial recognition services to recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers. This provides additional reason to ensure that humans undertake meaningful review, given their ongoing and ultimate accountability under the law for decisions that are based on the use of facial recognition.

This is the burden on the tech side. What the government needs to do is just not use it for mass surveillance or the continuous surveillance of certain people. Microsoft suggests warrants for continuous surveillance using facial recognition tech with the expected exceptions for emergencies and public safety risks.

When combined with ubiquitous cameras and massive computing power and storage in the cloud, a government could use facial recognition technology to enable continuous surveillance of specific individuals. It could follow anyone anywhere, or for that matter, everyone everywhere. It could do this at any time or even all the time. This use of facial recognition technology could unleash mass surveillance on an unprecedented scale.

[...]

We must ensure that the year 2024 doesn’t look like a page from the novel “1984.” An indispensable democratic principle has always been the tenet that no government is above the law. Today this requires that we ensure that governmental use of facial recognition technology remain subject to the rule of law. New legislation can put us on this path.

It's all good stuff that would protect citizens and curb abusive tech deployment if implemented across the board by tech companies. But that would likely require a legislative mandate, according to Microsoft. The end result is Microsoft asking the same entity it feels may abuse the tech to lay down federal guidelines for development and deployment.

I don't have any complaints about what Microsoft's proposing. I only question why it's proposing it. When a large corporation starts asking for government regulation, it's usually because increased regulation would keep the market smaller and weed out a few possible competitors. I wouldn't say this is the only reason Microsoft is handing out a long wish list of government mandates, but there's no way this isn't a factor.

Microsoft's management likely has genuine concerns about this tech and its future uses. Somewhat coincidentally, it's also in the best position to make these arguments. Other than a supposed misunderstanding about selling facial recognition tech to ICE, the company hasn't set its reputation on fire and/or been caught handing the government loads of tools that can be repurposed for oppression.

Other players in the facial recognition market have already ceded the high ground. Amazon has been handing out tech to law enforcement agencies even as Congress members are demanding answers from the company about its facial recognition software. Google may not be pushing facial recognition tech, but with it currently engaged in building an oppressor-friendly search engine for China's government, it can't really portray itself as a champion of civil liberties. Facebook has used facial recognition tech for years, but is currently so toxic no one really wants to hear what it has to say about privacy or government surveillance. Apple may have some guidance to offer, but the DOJ likely uses Tim Cook headshots for dartboards, making it less than receptive to the company's thoughts on biometric scanning. As for the rest of the players in the field -- the multiple contractors who sell surveillance equipment to governments all over the world -- they have zero concerns about government abuse or respecting civil liberties, so Microsoft's post may as well be written in Etruscan for all they'll get out of it.

I'm in firm agreement with Brad Smith/Microsoft that facial recognition tech is a threat to privacy and civil liberties. I also believe the companies crafting/selling this tech should vet their products thoroughly and be prepared to shut them down if they can't eliminate bias or if products are being used to conduct pervasive, unjustified surveillance. I don't believe most tech companies will do this voluntarily and know for a fact the government will not actively police use of these systems. The status quo -- zero accountability from governments and government contractors -- cannot remain in place. The courts may right some wrongs eventually, but until then, suppliers of facial recognition technology are complicit in the resulting civil liberties violations.

I applaud Microsoft for calling for action. But I will hold that applause until it becomes apparent Microsoft will maintain these standards internally, with or without a legislative mandate. If other companies choose to sign on as… I don't know… ethical surveillance tech dealers, that would be great. Asking the government to regulate tech development isn't the preferred course of action, but a surveillance tech Wild West isn't an ideal outcome either. Ideally, the government would set higher standards for adoption and deployment of tech along the lines Microsoft has proposed, policing itself by vetting its vendors better. But if the federal government were truly interested in limiting its abuse of tech developments, we would have seen some evidence of it already.

These suggestions should be voluntarily adopted by other tech companies, if for no other reason than it insulates them from elimination should the government decide it's going to up its acquisition and deployment standards. Microsoft scores a PR win, if nothing else, simply by being first. I appreciate its staking out a position on this issue, but remain cautiously pessimistic about the company's ability to live up to its own standards.

Filed Under: face recognition, regulations
Companies: microsoft


Reader Comments



    DannyB (profile), 11 Dec 2018 @ 6:06am

    Microsoft would never collaborate with government

    It is inconceivable (NSAKEY) that Microsoft would ever collaborate with the government to spy on everyone.

      NoahVail (profile), 11 Dec 2018 @ 6:24am

      Re: Microsoft would never collaborate with government

Add to that NYPD's years-old Domain Awareness System, built/maintained/expanded by Microsoft, which includes facial recognition tech.

        Anonymous Coward, 11 Dec 2018 @ 9:32am

        Re: Re: Microsoft would never collaborate with government

        When ICE and NYPD want to renew their contracts, I hope MS will require them to comply with these new rules. If they don't want to, kick them off and let them find someone else; even adding some friction to surveillance is worthwhile.

    Anonymous Anonymous Coward (profile), 11 Dec 2018 @ 7:21am

    Fines that work, make it personal

    Where is the call for severe penalties, both to the government entity that is using it, and the companies that produce the systems for each and every false positive?

On the government side, garnish the salaries of the people that took the system-provided error and failed to properly verify it before acting (let them apply for food stamps), as well as the operating budget of the department. On the provider side, something that would shock the conscience of each and every shareholder.

      OldMugwump (profile), 11 Dec 2018 @ 9:01am

      Re: Fines that work, make it personal

      If you demand zero false positives, you won't have any system at all.

      Even humans make false positive identification mistakes.

      The key is to get the users of the system to understand that the system has false positives, and to take that into consideration when making decisions.

      Just as they are supposed to do now.
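The false-positive point can be made concrete with a bit of base-rate arithmetic. A minimal sketch, with hypothetical numbers chosen purely for illustration (they are not claims about any actual system):

```python
# Illustrative base-rate arithmetic: even a highly accurate face
# recognition system produces many false positives at scale, because
# the innocent population vastly outnumbers the watchlist.

def expected_matches(population, watchlist_size, true_positive_rate, false_positive_rate):
    """Return expected (true matches, false matches) when scanning `population` faces."""
    innocents = population - watchlist_size
    true_matches = watchlist_size * true_positive_rate
    false_matches = innocents * false_positive_rate
    return true_matches, false_matches

# Hypothetical: scan 1,000,000 faces for 100 watchlisted people with a
# 99% hit rate and a 0.1% false positive rate.
tp, fp = expected_matches(1_000_000, 100, 0.99, 0.001)
print(tp)  # 99.0 expected true matches
print(fp)  # 999.9 expected false matches -- ~10x more innocents flagged than targets
```

Even with accuracy numbers this generous, flagged innocents outnumber actual targets roughly ten to one, which is exactly why human review of each "hit" matters.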

        NoahVail (profile), 11 Dec 2018 @ 9:04am

        Re: Re: Fines that work, make it personal

        "If you demand zero false positives, you won't have any system at all."

        Works for me.

        Anonymous Coward, 11 Dec 2018 @ 9:46am

        Re: Re: Fines that work, make it personal

Let's treat false positive ID matches like copyright violations. The software provider can be fined for every false positive not immediately identified and annulled in the system, and the users can be fined for acting on a false positive. It doesn't matter whether they're aware of the false positives.

        Oh, and no filtering should be needed/used.

          Anonymous Anonymous Coward (profile), 11 Dec 2018 @ 11:55am

          Re: Re: Re: Fines that work, make it personal

          That is an important point. Any positive should be verified by a human, and then due consideration taken before any action is taken. That due consideration might be some additional investigative work to make sure the 'identified' individual is actually who you are looking for.

Systems will likely have errors, but if sufficient due diligence is applied those errors should be minimized, and the person making any remaining error held accountable. If the due diligence causes too much wasted work on the users' part, they will probably take care of the manufacturers themselves, though I would still be for making the manufacturers accountable for the accuracy rates they claim at the time of sale.

            Dr-RJP, 12 Jan 2019 @ 3:41pm

            And what do you do about false negatives?

            Here's a hypothetical:

Let's say that ICE wants to use Microsoft's new FRS to identify a terrorist who was seen embedded in a Honduran caravan crossing the border. However, because of complaints of "profiling" and "racism" by SJW's, immigration activists, pro-Islamic groups and assorted liberal lunatics, the FRS algorithms have been so watered down that they miss Joe Terrorist before he makes his way to California where he hijacks a semi and uses it to mow down school kids waiting to board school buses.

            Who do you think will be the FIRST person, group, agency or company called out for not apprehending the terrorist in time to prevent the tragedy?

            HINT: It won't be Microsoft.

        Anonymous Coward, 11 Dec 2018 @ 11:00am

        Re: Re: Fines that work, make it personal

        If a system does not function properly, how can that system be trusted to perform any sort of operation?

        I define "properly" as no false positives because how can anyone justify anything else? Collateral damage is ok? When it is not you - amirite?

    Mason Wheeler (profile), 11 Dec 2018 @ 8:02am

    We must ensure that the year 2024 doesn’t look like a page from the novel “1984.”

    The further we go, the more it ends up looking like we're on course for Jennifer Government instead...

    GoodForThem, 11 Dec 2018 @ 10:05am

    OpenStandards

    Sounds like MSFT might be open to... well open standards.

    The next step for MSFT is to put funding into a non-profit like EFF which can mark up the language for public and congressional review.

    My concern with facial recognition is focused on how we live our lives and what we do. With facial recognition out in the open we are the SAME as China using social media scores to allow or disallow our personal activities. Facial recognition at an airport security screening or entering a courthouse I have no problem with, Microsoft's own facial recognition to act as a password to unlock my personal Windows computer I am fine with... but anything else GTFO!

      Anonymous Monkey (profile), 12 Dec 2018 @ 4:11pm

      Re: OpenStandards

      Microsoft's own facial recognition to act as a password to unlock my personal Windows computer I am fine with...

I'd rather it be used as my identifier (i.e. username), not a password. Saves a lot of trouble with all sorts of things.

    Slow Joe Crow, 11 Dec 2018 @ 12:23pm

    Time for countermeasures

My immediate response was to look up Juggalo makeup since it's very effective at thwarting facial recognition. Unfortunately, Juggalos are a "hybrid gang" according to the FBI, so maybe a Guy Fawkes mask is the answer.


