from the not-smarter,-better,-or-less-biased.-just-faster. dept
There’s plenty of human work to be done, but there never seem to be enough humans to do it. When things need to be processed in bulk, we turn it over to hardware and software. It isn’t better. It isn’t smarter. It’s just faster.
We can’t ask humans to process massive amounts of data because they just can’t do it well enough or fast enough. But they can write software that can perform tasks like this, allowing humans to do the other things they do best… like make judgment calls and deal with other humans.
Unfortunately, AI can turn out to be all too human, and not in the sentient, “turn everyone into paperclips” way it’s so often portrayed in science fiction. Instead, it becomes an inadvertent conduit for human bias, producing the same results as biased humans, only at a much faster pace, all while whitewashed by the assumption that ones and zeroes are incapable of being bigoted.
But that’s the way AI works, even when deployed with the best of intentions. Unfortunately, taking innately human jobs and subjecting them to automation tends to make societal problems worse than they already are. Take, for example, a pilot program that debuted in Pennsylvania before spreading to other states. Child welfare officials decided software should do some of the hard thinking about the safety of children. But when the data went in, the usual garbage came out.
According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children.
Fortunately, humans were still involved, which means not everything the AI spit out was treated as child welfare gospel.
The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.
But if the balance shifted towards more reliance on the algorithm, the results would be even worse.
If the tool had acted on its own to screen in a comparable rate of calls, it would have recommended that two-thirds of Black children be investigated, compared with about half of all other children reported, according to another study published last month and co-authored by a researcher who audited the county’s algorithm.
There are other backstops that minimize the potential damage caused by this tool, which the county relies on to handle thousands of neglect decisions a year. Workers are told not to use algorithmic output alone to instigate investigations. As noted above, workers are welcome to disagree with the automated determinations. And this tool is only used to handle cases of potential neglect or substandard living conditions, rather than cases involving more direct harm like physical or sexual abuse.
Allegheny County isn’t an anomaly. More locales are utilizing algorithms to make child welfare decisions. The state of Oregon’s tool is based on the one used in Pennsylvania, but with a few helpful alterations.
Oregon’s Safety at Screening Tool was inspired by the influential Allegheny Family Screening Tool, which is named for the county surrounding Pittsburgh, and is aimed at predicting the risk that children face of winding up in foster care or being investigated in the future. It was first implemented in 2018. Social workers view the numerical risk scores the algorithm generates – the higher the number, the greater the risk – as they decide if a different social worker should go out to investigate the family.
But Oregon officials tweaked their original algorithm to only draw from internal child welfare data in calculating a family’s risk, and tried to deliberately address racial bias in its design with a “fairness correction.”
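To make the mechanics concrete, here is a toy sketch of how a screening tool of this kind can combine a risk score with a per-group “fairness correction.” The actual Allegheny and Oregon models are not public, so every feature, weight, and threshold below is hypothetical and purely illustrative.

```python
# Illustrative sketch only. The real screening tools weigh hundreds of
# features; this toy version counts prior system contacts. All names
# and numbers are made up for illustration.

def risk_score(referral: dict) -> float:
    """Toy stand-in for the model: more prior contacts -> higher score,
    capped at 20 (mimicking a bounded numerical risk scale)."""
    return min(20.0, 2.0 * referral.get("prior_referrals", 0)
               + 3.0 * referral.get("prior_placements", 0))

def should_investigate(referral: dict, base_threshold: float = 10.0) -> bool:
    """Screen in when the score clears a threshold. A 'fairness
    correction' of the kind Oregon described might adjust thresholds
    per group so screen-in rates even out; the offset table here is a
    made-up example of that mechanism, not the state's actual method."""
    offsets = {"group_a": 0.0, "group_b": 2.0}  # hypothetical calibration
    threshold = base_threshold + offsets.get(referral.get("group"), 0.0)
    return risk_score(referral) >= threshold

# The score is advisory: a human screener still makes the final call.
flag = should_investigate({"prior_referrals": 6, "group": "group_a"})
```

The point of the sketch is that the “correction” lives in a handful of tuned constants, and that a biased score function feeds straight through to the screening decision unless a human overrides it.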
But Oregon officials have decided to ditch this following the AP investigation published in April (as well as a nudge from Senator Ron Wyden).
Oregon’s Department of Human Services announced to staff via email last month that after “extensive analysis” the agency’s hotline workers would stop using the algorithm at the end of June to reduce disparities concerning which families are investigated for child abuse and neglect by child protective services.
“We are committed to continuous quality improvement and equity,” Lacey Andresen, the agency’s deputy director, said in the May 19 email.
There’s no evidence Oregon’s tool resulted in disproportionate targeting of minorities, but the state obviously feels it’s better to get out ahead of the problem than to dig itself out of a hole later. It appears, at least from this report, that the immensely important job of ensuring children’s safety will still be handled mostly by humans. And yes, humans are more prone to bias than software, but at least their bias isn’t hidden behind a wall of inscrutable code, and it operates far less efficiently than even the slowest biased AI.