Cities Looking To Dump ShotSpotter Since It's Barely More Useful Than Doing Nothing At All
from the drop-it-like-it's-shot dept
Tech that supposedly detects gunshots has been deployed in multiple cities across the nation with the intent of providing faster response times to possible violence and giving investigators a heads up on where illegal activity may have occurred. The tech has some pretty serious problems, though.
For one, it cannot reliably detect gunshots.
A 2013 investigation of the effectiveness of ShotSpotter in Newark, New Jersey revealed that from 2010 to 2013, the system’s sensors alerted police 3,632 times, but only led to 17 actual arrests. According to the investigation, 75% of the gunshot alerts were false alarms.
The AI can “hear” the percussive noise and attempt to determine whether it’s an actual gunshot. It could be a car backfire or fireworks or some other noise noticeably louder than the ambient sound at the detector’s location. Far too often (and far more often than ShotSpotter claims), the guesses are wrong.
Out of Fall River’s 51 ShotSpotter activations in 2017, 21 have been false alarms, a 41 percent error rate. The sensors often report loud noises such as car backfires and fireworks as gunshots.
Dupere said there have been another 15 ShotSpotter activations this year that police later determined were unfounded because the responding officers found no evidence of gunfire or any witnesses to corroborate that they had seen or heard gunshots.
More disturbingly, the software’s determinations can apparently be overridden if investigators truly want a shot to be spotted. ShotSpotter personnel have altered AI judgment calls to fit police narratives. Worse, they’ve also moved detected shots from one location to another to better fit law enforcement’s theory about who was involved in a shooting and where it happened.
ShotSpotter is an investigative tool, but it’s a particularly malleable one. Investigators and officers seem pleased ShotSpotter personnel are willing to alter records to better suit their theories and narratives. But that’s completely at odds with the ideal of evidence introduced in criminal cases, which is supposed to be free of bias and deliberate manipulation. When (criminal) history can be rewritten on the fly, the facts of the case are no longer factual.
These problems are exacerbated when law enforcement assumes any noise picked up by ShotSpotter detectors is a gunshot and sends officers in to harass the locals until something turns up. Like almost every other tech advancement deployed by cops (especially “predictive policing,” but there are others), the deployed tech tends to serve pre-existing biases held for decades by the law enforcement community: that minorities are inherently more “dangerous” than the white folks who tend to be overrepresented in law enforcement agencies.
Cities and police departments are loath to disclose the locations of their ShotSpotter sensors, but through public records requests Motherboard also obtained years of data from Kansas City, Missouri; Cleveland, Ohio; and Atlanta, Georgia showing where ShotSpotter sensors generated alerts—a proxy for the general location of the sensors.
In all four cities, the data shows that the sensors are also placed almost exclusively in majority Black and brown neighborhoods, based on population data from the U.S. Census.
So, there’s the bigotry angle, on top of the evidence-faking issues. And then there are the limitations of the tech, which make it an unwise investment for cities that actually want to do something about violent crime, rather than just let cops engage in biased policing.
The San Diego City Council was scheduled to vote July 27 on a $1.15 million renewal of its ShotSpotter contract, but city officials withdrew the item from the agenda after dozens of residents submitted public comments opposing the contract.
“ShotSpotter is a perfect example of the patronizing approach to public safety in Black and brown neighborhoods—this idea that we know what’s best for Black and brown community members,” Khalid Alexander, president of Pillars of the Community, told Motherboard. Pillars is an organization that opposes the over-policing of Black and brown residents, and is based in San Diego’s District 4—the only area of the city where ShotSpotter is located.
“I haven’t heard one person in District 4 … come out and say that the ShotSpotter technology is something needed to keep us safe,” Alexander said.
Unsurprisingly, ShotSpotter says the opposite:
In a statement emailed to Motherboard, ShotSpotter claimed that local communities have told them the opposite, and that it would be “false and misleading” to describe the activists as representing the larger population. “As the communities who suffer most from gun violence, they recognize and appreciate ShotSpotter’s role helping police departments combat gun violence,” the company wrote.
While there’s certainly some confirmation bias in play here, it’s safe to say it’s more than just a land of contrasts out there. But the only party that really has any money at stake is ShotSpotter, and self-preservation is a far stronger motivator than simply feeling underserved by law enforcement agencies that have decided to outsource serving and protecting to passive sensors and artificial intelligence. Taxpayer frustration may touch on fiscal issues, but since it’s an aggregate fund, unhappy citizens tend to be less motivated by their bottom lines than companies with multi-million dollar contracts on the line.
And maybe $1.15 million/year seems like a drop in the bucket when it comes to police department budgets. But residents are sick of ShotSpotter and its ability to send officers expecting violence flooding into communities with guns akimbo. In Chicago, the contract now in danger of going unrenewed is $33 million, which is a very tangible hit to the bottom line of a company that early last year posted record quarterly revenue of a little over $10 million.
Why keep paying for something that isn’t doing anything much more than allowing alarmists to claim that violence is on the rise and the only cure is more cop violence? San Diego might not be paying much for its limited rollout of ShotSpotter, but even its small sample set is a complete disappointment.
A San Diego Police Department spokesperson told Voice of San Diego that during the four years ShotSpotter had been in use (as of September 2020) officers had made only two arrests responding to an alert and only one of those was directly linked to the alert.
Meanwhile, 72 of the 584 ShotSpotter alerts during that time period were determined to be unfounded, “a whopping 25 times higher than the 0.5 percent false positive rate put forth by the company,” the Voice of San Diego reported, based on data provided by the city’s police department.
Thirty-six times more false positives than useful arrests. $4.6 million (over four years at the going rate) should buy more help than that. Two arrests is only marginally better than having nothing at all, which is so far below rounding error as to be laughable.
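The arithmetic behind those figures checks out, for what it’s worth. A quick sanity check using only the numbers reported above (the contract price is the $1.15 million/year from the withdrawn San Diego renewal; the 0.5 percent figure is the company’s own claimed false positive rate):

```python
# Sanity-checking the San Diego ShotSpotter numbers cited in the
# Voice of San Diego report (four years of deployment, as of Sept. 2020).

total_alerts = 584        # total ShotSpotter alerts over ~4 years
unfounded = 72            # alerts later determined to be unfounded
arrests = 2               # arrests made while responding to alerts
annual_cost = 1_150_000   # proposed yearly contract price
years = 4

observed_false_rate = unfounded / total_alerts   # ~12.3%
claimed_false_rate = 0.005                       # company's 0.5% claim

print(f"Observed false positive rate: {observed_false_rate:.1%}")
print(f"Multiple of the company's claim: {observed_false_rate / claimed_false_rate:.0f}x")
print(f"False positives per arrest: {unfounded // arrests}")
print(f"Four-year cost: ${annual_cost * years:,}")
```

That works out to roughly a 12.3 percent false positive rate (about 25 times the company’s 0.5 percent claim), 36 unfounded alerts for every arrest, and $4.6 million spent over four years.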
Even if the tech improves as time goes on, we still need to ask why we even need it. So far, all it’s done is make ShotSpotter more money and justify biased policing efforts by law enforcement agencies willing to make use of any new development that allows them to keep doing the things they’ve always done. Taxpayers shouldn’t be asked to fund programs that set cops up for failure. And they certainly shouldn’t be expected to pay for the dubious privilege of being subjected to a system more capable of altering criminal evidence than making neighborhoods safer.