EU Parliament Told Predictive Policing Software Relies On Dirty Data Generated By Corrupt Cops

from the junk-data,-garbage-outcomes dept

Predictive policing efforts continue to expand around the world. Fortunately, so does the criticism. The data witchcraft that expects a pile of crap data, generated by biased policing, to coalesce into actionable intelligence continues to be touted by those least likely to be negatively affected by it: namely, law enforcement agencies and the government officials who love them.

The theory that people should be treated like criminals because someone else committed crimes in the area in the past is pretty specious. But as long as these tools produce temporary drops in criminal activity, fans of unreasonable suspicion will keep using them, despite the lack of any proven long-term track record.

It's not just a US problem. It's a problem everywhere. The European Parliament has been asking for feedback on predictive policing efforts, which is more than most agencies in the US are willing to do. The Executive Director of the AI Now Institute, Andrea Nill Sanchez, recently testified during a public hearing on the issue, using the Institute's 2019 report on predictive policing to highlight everything that's wrong with turning law enforcement over to modeling algorithms. (via The Next Web)

The point of Sanchez's testimony [PDF] is this: we can't trust the data being fed to these systems. Therefore, we definitely can't trust the predictions being produced by them. Dirty cops create dirty data.

Despite what the term may suggest, predictive policing is neither magic nor a precise science that allows us to see into the future. Instead, predictive policing refers to fallible systems that use algorithms to analyze available data and aim to produce a forecasted probability of where a crime may occur, who might commit it, or who could be a victim.

Left unchecked, the proliferation of predictive policing risks replicating and amplifying patterns of corrupt, illegal, and unethical conduct linked to legacies of discrimination that plague law enforcement agencies across the globe.

This is drawn from AI Now's report, which pointed out that some of the most corrupt police forces in the US were feeding data from biased policing into systems now destined to generate nothing but more of the same corruption.

(1) Chicago, an example of where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices and recent litigation suggests an extremely high risk that dirty data was or could be used in predictive policing; and (3) Maricopa County, where despite extensive evidence of dirty policing practices, a lack of public transparency about the details of various predictive policing systems restricts a proper assessment of the risks.

This is only one of several problems. First, almost every system used by law enforcement agencies is a closed system, unable to be inspected by anyone outside the companies that sell them and the agencies that use them. There's almost no way for outsiders to vet the data or the outcomes. They can only judge the systems by their results. And those results are withheld or released pretty much at the sole discretion of the agencies using predictive policing software. If it isn't producing decreases in crime rates, citizens will be the last to know.

Second, the data used is both tainted and selective, skewing the output towards the selective enforcement agencies already engage in.

[P]redictive policing primarily relies on inherently subjective police data, which reflects police practices and policies—not actual crime rates. Law enforcement exercises enormous discretion in how it carries out its work and collects data, including the crimes and criminals it overlooks.

Finally, there's no way to cleanse the systems of dirty data. Police departments are unable to recognize that their own biases might taint inputs. And citizens are powerless to challenge the data that's being used to target them for increased law enforcement scrutiny. With pushback effectively neutralized -- both by police practices and the secrecy surrounding proprietary algorithms -- agencies are just going to continue engaging in the same biased policing, only with the justification that the software told them to do it.
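This feedback loop is easy to demonstrate with a toy model. Everything below is an illustrative assumption — the neighborhood names, the numbers, and the winner-take-all patrol rule are not any vendor's actual algorithm — but the mechanism is the one described above: patrols follow past recorded crime, and crime is only recorded where patrols are, so a small historical bias compounds.

```python
# Toy model of a predictive policing feedback loop.
# Two neighborhoods with IDENTICAL true crime: 100 incidents each per year.
TRUE_INCIDENTS = 100

# Historically biased seed data: "A" was over-policed in the past.
recorded = {"A": 60, "B": 40}

for year in range(5):
    # The "prediction" is just past recorded crime; patrols go to the
    # neighborhood the model ranks highest (winner-take-all for clarity).
    target = max(recorded, key=recorded.get)
    # Crime only enters the database where officers are present to log it.
    recorded[target] += TRUE_INCIDENTS
    # The other neighborhood's identical incidents go unrecorded --
    # and therefore unpredicted next year.

total = recorded["A"] + recorded["B"]
print({hood: f"{count / total:.0%}" for hood, count in recorded.items()})
# → {'A': '93%', 'B': '7%'} — A's share climbs from 60% toward 100%
#   even though the true crime rates never differed.
```

Real deployments spread patrols across many areas rather than sending everything to one, but researchers studying actual predictive policing systems have documented the same runaway dynamic: the model's output becomes its own confirmation.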

Since life and liberty are on the line for citizens put at the mercy of software fed junk data, Sanchez suggests the following should be instituted (at a bare minimum) before predictive policing software is deployed:

As a first step, agencies considering using predictive policing tools should undertake Algorithmic Impact Assessments that include the following: (1) a self-assessment evaluating the system’s potential impacts on fairness, justice, and bias; (2) a meaningful external review process; (3) public notice and comment; and (4) enhanced due process mechanisms to challenge unfair, biased, or other harmful effects.

But that's just the start. The whole system has been failing for years, and reform from the ground up is needed before this software can be trusted in the hands of police. Sanchez calls for the entire criminal justice system to be reformed, since it's the source of institutional racism and the enabler of some of the worst law enforcement behavior.

The negatives far outweigh the positives when it comes to predictive policing. The EU is taking the right step by asking for public input. Here in the US, public input is still an afterthought -- something that only happens after enough damage has been done. Turning law enforcement over to tainted data and proprietary algorithms poses a genuine threat to life and liberty. Hopefully, someone in the US will take a cue from the EU Parliament's actions and start asking law enforcement agencies tough questions about their predictive policing programs.

Filed Under: eu, junk data, law enforcement, predictive policing
