New Report On Predictive Policing Shows How New Tech Is Giving Us Little More Than The Same Old Racism

from the recycling-racism dept

The National Association of Criminal Defense Lawyers has just released an in-depth examination of predictive policing. Titled “Garbage In, Gospel Out,” it details the many ways bad data based on biased policing has been allowed to generate even more bad data, allowing officers to engage in more biased policing but with the blessing of algorithms.

Given that law enforcement in this country can trace itself back to pre- and post-Civil War slave patrols, it’s hardly surprising modern policing — with all of its tech advances — still disproportionately targets people of color. Operating under the assumption that past performance is an indicator of future results, predictive policing programs (and other so-called “intelligence-led” policing efforts) send officers to places they’ve already been several times, creating a self-perpetuating feedback loop that ensures the more often police head to a certain area, the more often police will head to a certain area.

As the report [PDF] points out, predictive policing is accurately named, if inadvertently so. It doesn’t predict where crime will happen. It only predicts how police will behave.

If crime data is to be understood as a “by-product of police activity,” then any predictive algorithms trained on this data would be predicting future policing, not future crime. Neighborhoods that have been disproportionately targeted by law enforcement in the past will be overrepresented in a crime dataset, and officers will become increasingly likely to patrol these same areas in order to “observe new criminal acts that confirm their prior beliefs regarding the distributions of criminal activity.” As the algorithm becomes increasingly confident that these locations are most likely to experience further criminal activity, the volume of arrests in these areas will continue to rise, fueling a never-ending cycle of distorted enforcement.
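The feedback loop the report describes can be made concrete with a toy simulation. This is my own sketch, not code from the report, and the neighborhood names, rates, and patrol counts are all invented for illustration: two neighborhoods have the *same* true crime rate, but one starts with more recorded incidents because of past over-policing, and patrols are allocated in proportion to the recorded history.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true crime rate (hypothetical numbers).
TRUE_CRIME_RATE = 0.1   # chance a patrol visit observes an incident
PATROLS_PER_DAY = 100   # total patrols allocated each day

# Historical bias: neighborhood A starts with more recorded incidents,
# reflecting past over-policing rather than more crime.
recorded = {"A": 30, "B": 10}

for day in range(200):
    total = sum(recorded.values())
    for hood in recorded:
        # Crude stand-in for a predictive model: send patrols where
        # the recorded history says crime "is".
        patrols = round(PATROLS_PER_DAY * recorded[hood] / total)
        # Crime is only recorded where police are present to observe it.
        for _ in range(patrols):
            if random.random() < TRUE_CRIME_RATE:
                recorded[hood] += 1

share_a = recorded["A"] / sum(recorded.values())
print(f"Share of recorded crime attributed to A: {share_a:.0%}")
```

Even though both neighborhoods generate crime at an identical rate, the initial disparity never washes out: patrols follow the biased record, the biased record grows where the patrols go, and the model's confidence in the "high crime" neighborhood looks increasingly justified by data it manufactured itself.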

This loop bakes racism into the algorithm, “tech-washing” (as the NACDL puts it) the data to give it the veneer of objectivity. The more this happens, the worse it gets. Neighborhoods become “high crime” areas and police respond accordingly, having convinced themselves that looking busy is preferable to fighting crime. Millions of tax dollars are spent creating these destructive loops — a perverse situation that asks taxpayers to fund their own misery.

Once an area is deemed worthy of constant police presence, those living there can expect to have their rights and liberties curtailed. And courts have sometimes agreed with these assessments, treating entire neighborhoods as inherently suspicious and permitting officers to search and question people who simply happened to be in the “wrong place” at literally any time. That’s unlikely to improve until courts start asking tough questions about predictive policing programs.

Data-driven policing raises serious questions for a Fourth Amendment analysis. Prior to initiating an investigative stop, law enforcement typically must have either reasonable suspicion or probable cause. Does a person loitering on a corner in an identified “hotspot” translate to reasonable suspicion? What if that person was identified by an algorithm as a gang member or someone likely to be involved in drug dealing or gun violence? Can an algorithm alone ever satisfy the probable cause or reasonable suspicion requirement? The lack of transparency and clarity on the role that predictive algorithms play in supporting reasonable suspicion determinations could make it nearly impossible to surface a Fourth Amendment challenge while replicating historic patterns of over-policing.

Rights are abridged before and after the fact, all in the name of “smarter” policing that greatly resembles more analog methods like “broken windows” policing or the numerous stop-and-frisk programs that allowed officers to stop and search nearly anyone for nearly no reason. The only differences are how much is being spent and how likely cops and their overseers are to believe it’s “smarter” just because it’s attached to thousands of dollars of computer equipment.

There’s more to the report than this; this barely scratches the surface. These systems have numerous problems, including the fact that they’re proprietary: the companies behind them won’t allow their software to be examined by defendants, and the programs themselves are rarely subject to oversight by the departments using them or by the city governments presiding over those departments.

Data-driven policing is faulty because it relies on faulty data. It’s as simple as that. Here are just a few examples of how “smarter” policing is actively harming the communities it’s deployed in.

In response to such advances in crime-mapping technologies, researchers have discovered that the underlying mathematical models are susceptible to “runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the actual crime rate” as a byproduct of biased police data…

Bad data also infects other police department efforts, such as gang databases: potential sources of useful intel that have been allowed to become landfills for garbage inputs.

For example, CalGang, a database widely used in California, listed 42 infants under the age of 1 as active gang members. Moreover, because there is “no clear, consistent and transparent exit process” for those on the database, it can be assumed that a significant proportion of “gang” designees were added in their teens and preteens. The Chicago Police Department (CPD)’s database includes more than 7,700 people who were added to the database before they turned 18, including 52 children who were only 11 or 12 years old at the time of their inclusion. An investigation published by The Intercept identified hundreds of children between the ages of 13 and 16 listed in the New York Police Department (NYPD)’s gang database in 2018.

The programs have proven so useless in some cases that cities that have long relied on predictive policing programs are dumping them.

The SSL program was “dumped” by the CPD [Chicago PD] in 2020 after a report published by the City of Chicago’s Office of the Inspector General (OIG) concluded that the SSL had not been effective in reducing violence, and that “of the 398,684 individuals recorded in one version of the model, only 16.3 percent were confirmed to be members of gangs.” In her meeting with the Task Force, Jessica Saunders, formerly a researcher at the RAND Corporation, additionally noted that there was no evidence that any person-based predictive policing strategies like the SSL had proven “effective” by any metrics.

The biggest lie in all of this isn’t how it’s portrayed to outsiders. It’s the lie law enforcement agencies tell themselves: that data-driven policing is better and smarter than the way they used to do things. But it’s just the same things they’ve always done. The tech doesn’t give them an edge. It just confirms their biases.

As legal scholar Elizabeth Joh noted in her conversation with the Task Force, the discussion surrounding big data policing programs often assumes that the police are the consumers, or the “end users,” of big data, when they themselves are generating much of the information upon which big data programs rely from the start. Prior to being fed into a predictive policing algorithm, crime data must first be “observed, noticed, acted upon, collected, categorized, and recorded” by the police. Therefore, “every action – or refusal to act – on the part of a police officer, and every similar decision made by a police department, is also a decision about how and whether to generate data.”
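Joh’s point can be reduced to a toy calculation (my own illustration, with hypothetical numbers, not figures from the report): if an incident only enters the dataset when police are positioned to observe and record it, two neighborhoods with identical underlying crime produce wildly different “crime data.”

```python
# Hypothetical numbers for illustration only.
true_incidents = {"A": 100, "B": 100}    # identical underlying crime
patrol_coverage = {"A": 0.6, "B": 0.1}   # fraction of incidents police
                                         # are present to observe/record

# Recorded crime is a by-product of where police choose to be.
recorded = {hood: round(true_incidents[hood] * patrol_coverage[hood])
            for hood in true_incidents}
print(recorded)  # {'A': 60, 'B': 10}
```

The algorithm downstream never sees `true_incidents`; it only sees `recorded`, which is as much a measurement of `patrol_coverage` as of crime.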

Data-driven policing is pretty much indistinguishable from non-data-driven policing. The only difference is how much is being spent on useless tech and what police officers and supervisors are telling themselves to maintain the illusion that biased policing can actually increase public safety and reduce crime.

Companies: nacdl


Comments on “New Report On Predictive Policing Shows How New Tech Is Giving Us Little More Than The Same Old Racism”

Ninja (profile) says:

Structural racism is deep ingrained in society and I’m not talking about the US alone. It starts by screwing up black children by denying them the same opportunities white kids have going up to screwing them by paying lower wages for the same positions or flat out denying them employment and forcing them towards crime as means of surviving. Then it further screws them with a biased judicial system that’s rigged into piling up accusations while denying means of defending themselves which lands them in a completely dysfunctional prison system with punishment as its only goal. And good luck if they decide to be good persons, follow the law after serving prison time, innocent or guilty, because nobody is gonna employ them or they’ll be subject to atrocious labor conditions and wages.

I’m describing far too many countries. Civilization my ass.

Anonymous Coward says:

Re: Re:

Structural racism is deep ingrained in society and I’m not talking about the US alone. It starts by screwing up black children

If you accept that children can be factually categorized as "black", you’re part of the problem. The (largely but not exclusively) American concept of "race" is pseudoscience—an invented system of oppression disguised as genetic science. It’s like in India, where it’s illegal to discriminate by caste, except where that’s mandatory in the interest of helping out the disadvantaged castes. You can’t tell children they’re advantaged or disadvantaged by virtue of skin color (or whatever else), and expect that not to "screw them up" in some way. American children probably wouldn’t treat skin color as anything more important that eye color or shoe size if not for what they learn from adults.

Ninja says:

Re: Re: Re:

By black children I mean minorities children.
"You can’t tell children they’re advantaged or disadvantaged by virtue of skin color (or whatever else), and expect that not to "screw them up" in some way. American children probably wouldn’t treat skin color as anything more important that eye color or shoe size if not for what they learn from adults." <<< no, the kids are taught about bigotry and prejudice in general but the system was built by bigoted people in a way that screws minorities since birth.

This comment has been flagged by the community.

restless94110 (profile) says:

Stating the obvious

Uh, it’s not racist to realize reality: blacks are much more violent than other races. AI and programming data quickly realize this. Idiots call that racist. Obviously it’s just the truth. Machines don’t lie. They ain’t racist.

Those that deny the reality of black violence, like you, are the true racists. You are anti-white racists, and I’m betting you are white, It’s a mental illness in many white people.

They call frank reporting and studies as racists because they show the truth, Time for you to get back to the science.

Anonymous Coward says:

Re: Stating the obvious

Uh, it’s not racist to realize reality: blacks are much more violent than other races. AI and programming data quickly realize this.

If they are subjected to most of the police violence, they will appear more violent in police data sets, although they are the victims of violence and not the perpetrators.

bhull242 (profile) says:

Re: Stating the obvious

Uh, it’s not racist to realize reality: blacks are much more violent than other races.

I don’t think you know what “racism” means, because that statement right there is racist in itself, and it only gets worse in context.

AI and programming data quickly realize this. Idiots call that racist. Obviously it’s just the truth. Machines don’t lie. They ain’t racist.

Here’s the thing: a machine can only work with the data you put into it. If the cops are plugging in data based on their racist views and unequal enforcement, the computer can’t just magically remove the racism from the data. Garbage in, garbage out.

Those that deny the reality of black violence, like you, are the true racists.

No one is saying black violence doesn’t happen. Only an idiot would say that. Saying that blacks are more likely to be violent after controlling for other factors like economic status is the problem here.

You are anti-white racists, and I’m betting you are white, […]

Ummmm… Do you not realize that that assumption about their skin color is also racist? Also, it’s pretty typical for bigots to say, “I’m not a bigot! You’re a bigot!” What are you, 12?

It’s a mental illness in many white people.

And now you show you don’t know what mental illness is on top of not understanding what racism is. Are you going to go for the hat trick?

They call frank reporting and studies as racists because they show the truth, […]

Uh, no. For one thing, most studies don’t show that blacks are more likely to commit violence, so that’s just false. Most of the ones that did show such a connection either failed to consider confounding factors (like poverty), had small sample sizes or selection bias, or had other problems.

Time for you to get back to the science.

And we’ve got the hat trick! Yeah, based on the context, it’s clear that you don’t understand what science is, either.
