Oregon State Officials Dump AI Tool Used To Initiate Child Welfare Investigations

from the not-smarter,-better,-or-less-biased.-just-faster. dept

There’s plenty of human work to be done, but there never seems to be enough humans to do it. When things need to be processed in bulk, we turn it over to hardware and software. It isn’t better. It isn’t smarter. It’s just faster.

We can’t ask humans to process massive amounts of data because they just can’t do it well enough or fast enough. But they can write software that can perform tasks like this, allowing humans to do the other things they do best… like make judgment calls and deal with other humans.

Unfortunately, even AI can become mostly human, and not in the sentient, “turn everyone into paperclips” way it’s so often portrayed in science fiction. Instead, it becomes an inadvertent conduit of human bias that can produce the same results as biased humans, only at a much faster pace while being whitewashed with the assumption that ones and zeroes are incapable of being bigoted.

But that’s the way AI works, even when deployed with the best of intentions. Taking innately human jobs and subjecting them to automation tends to make societal problems worse than they already are. Take, for example, a pilot program that debuted in Pennsylvania before spreading to other states. Child welfare officials decided software should do some of the hard thinking about the safety of children. But when the data went in, the usual garbage came out.

According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children.

Fortunately, humans were still involved, which means not everything the AI spit out was treated as child welfare gospel.

The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.

But if the balance shifted towards more reliance on the algorithm, the results would be even worse.

If the tool had acted on its own to screen in a comparable rate of calls, it would have recommended that two-thirds of Black children be investigated, compared with about half of all other children reported, according to another study published last month and co-authored by a researcher who audited the county’s algorithm.

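To put those quoted rates in perspective, here is the arithmetic as a quick sketch in Python. The two rates come straight from the study summary above; the ratio calculation and the “disparity ratio” framing are ours, added only for illustration.

# The two screen-in rates quoted above, per the study summary.
black_rate = 2 / 3    # recommended investigation rate for Black children
other_rate = 1 / 2    # recommended investigation rate for all other children

# A standard fairness check is the ratio of selection rates between groups;
# 1.0 means parity. Here it works out to roughly 1.33x.
print(f"disparity ratio: {black_rate / other_rate:.2f}")  # -> disparity ratio: 1.33
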
There are other backstops that minimize the potential damage caused by this tool, which the county relies on to handle thousands of neglect decisions a year. Workers are told not to use algorithmic output alone to instigate investigations. As noted above, workers are welcome to disagree with the automated determinations. And the tool is only used to handle cases of potential neglect or substandard living conditions, rather than cases involving more direct harm like physical or sexual abuse.

Allegheny County isn’t an anomaly. More locales are utilizing algorithms to make child welfare decisions. The state of Oregon’s tool is based on the one used in Pennsylvania, but with a few helpful alterations.

Oregon’s Safety at Screening Tool was inspired by the influential Allegheny Family Screening Tool, which is named for the county surrounding Pittsburgh, and is aimed at predicting the risk that children face of winding up in foster care or being investigated in the future. It was first implemented in 2018. Social workers view the numerical risk scores the algorithm generates – the higher the number, the greater the risk – as they decide if a different social worker should go out to investigate the family.

But Oregon officials tweaked their original algorithm to only draw from internal child welfare data in calculating a family’s risk, and tried to deliberately address racial bias in its design with a “fairness correction.”
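
Neither the county nor the state has published its actual model, so what follows is only a minimal sketch in Python of the general shape these screening tools take: hand-picked features go in, a numeric risk score comes out, and a threshold turns that score into a screen-in recommendation. Every feature name and weight here is invented, and the per-group threshold standing in for a “fairness correction” is an assumption about how such a correction can work, not a description of Oregon’s.

from dataclasses import dataclass

@dataclass
class Referral:
    prior_referrals: int   # hypothetical feature: earlier hotline calls
    prior_placements: int  # hypothetical feature: earlier foster placements
    household_size: int    # hypothetical feature
    group: str             # demographic group, consulted only by the "correction"

def risk_score(r: Referral) -> float:
    # Invented linear weights; a real tool would be a trained statistical model.
    return 2.0 * r.prior_referrals + 3.0 * r.prior_placements + 0.5 * r.household_size

# One plausible form of a "fairness correction": calibrate a separate
# screen-in threshold per group so historical screen-in rates equalize.
THRESHOLDS = {"group_a": 10.0, "group_b": 12.0}

def screen_in(r: Referral) -> bool:
    return risk_score(r) >= THRESHOLDS.get(r.group, 11.0)

referral = Referral(prior_referrals=3, prior_placements=1, household_size=4, group="group_a")
print(screen_in(referral))  # True: score 11.0 clears group_a's threshold of 10.0

Note that everything deciding whether bias gets amplified or damped (the training data, the weights, those thresholds) lives where no hotline worker ever sees it, which is exactly the “wall of inscrutable code” problem this piece returns to below.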

Despite those alterations, Oregon officials have decided to ditch the tool following the AP investigation published in April (as well as a nudge from Senator Ron Wyden).

Oregon’s Department of Human Services announced to staff via email last month that after “extensive analysis” the agency’s hotline workers would stop using the algorithm at the end of June to reduce disparities concerning which families are investigated for child abuse and neglect by child protective services.

“We are committed to continuous quality improvement and equity,” Lacey Andresen, the agency’s deputy director, said in the May 19 email.

There’s no evidence Oregon’s tool resulted in disproportionate targeting of minorities, but the state obviously feels it’s better to get out ahead of the problem than to dig out of a hole later. It appears, at least from this report, the immensely important job of ensuring children’s safety will still be handled mostly by humans. And yes, humans are more prone to bias than software, but at least their bias isn’t hidden behind a wall of inscrutable code, and it operates far less efficiently than even the slowest biased AI.


Comments on “Oregon State Officials Dump AI Tool Used To Initiate Child Welfare Investigations”


Hyman Rosen (profile) says:

Re: Re:

When it comes to their favored victim groups, woke ideologues are always there to firmly plant a thumb on the scale. Performance data on outcomes is their bête noire, and they despise AI systems for gathering such data and revealing truths they prefer to hide. They won’t rest until AI systems are programmed to yield the lies they want.

It’s always easier for them to target people than machines when it’s people pointing out inconvenient facts: https://www.mindingthecampus.org/2018/03/15/why-a-penn-professor-was-vilified-for-telling-the-truth-about-race/

Mike Masnick (profile) says:

Re: Re: Re:

You may be the stupidest, most gullible commenter I’ve ever seen on this site, Hyman. You don’t understand anything, yet believe you’re an expert. Your level of gullible ignorance, driven by grifters who are lying to you, is just sad and pathetic. I don’t know how old you are, but I hope one day you grow out of your pathetic life and learn a little.


Hyman Rosen (profile) says:

Re: Re: Re:2

61. The way I’m most likely to grow out of my life is feet-first, although hopefully not for some time yet. I don’t find my life to be pathetic. It would be nice to identify as thin, fit, and not bald, but unlike certain people, when I look in the mirror I see what is really there.

You have fallen for a belief system that is patently false, have incorporated it into your identity, and therefore feel fury at anyone who dares oppose it. It’s the mindset that shipped people off to gulags and locked them in asylums for questioning Communism.

Your calling me names isn’t going to change anything. If you don’t like hearing opposing views, do what your favored platforms do: call it hate speech and ban it.

Mike Masnick (profile) says:

Re: Re: Re:3

You have fallen for a belief system that is patently false

A belief system cannot be “false,” but either way I don’t think you have the first idea what my “belief” system is. What I will say is that I believe in respecting others. You have shown you do not believe in that, and that’s enough for me to realize you are pathetic.

At the same time, you have this weird obsession with other people’s genitals. I find that creepy in the extreme.

Just because I respect people does not make me “a woke gender ideologue” which is the label you seem to throw on anyone who asks you to stop being a total asshole. “Woke gender ideologue” is a nonsense term, and generally only shows that whoever is saying it is a foolish, gullible person. Which is exactly what you’ve shown.

Imagine getting to the age of 61 and thinking the most important thing in life is making sure you know what everyone’s genitals are, and then deliberately looking to deny them basic dignity when there is extensive evidence of how doing so puts people in harm’s way.

It’s a sick perversion, Hyman. And I do hope one day you realize just how fucked up it truly is.

I’m not “woke” because that’s a nonsense term. But I do respect people’s privacy and I prefer to give people the dignity of referring to people as they wish to be referred to.

Your calling me names isn’t going to change anything.

I wasn’t calling you names to change things. I called you names because nothing else seems to get through your incredibly dense skull. The fact that you pollute my site with utter nonsense and your genital obsession is ridiculous.

I am going to ask you now to go away and do not comment here again. I am asking you politely this time. Going forward I may go further.

This is not because I don’t wish to hear opposing views. Yours is not an opposing view. Yours is a sick, perverted obsession, and it is frequently off topic, distracting, and obnoxious.

Go away. You can read this site all you want, but do not comment here any more unless you can behave yourself.

Hyman Rosen (profile) says:

Re: Re: Re:4

No, I think I’ll persist until you use force.

There is no respect in men demanding to enter women’s single-sex spaces. It is not obsession with genitals to insist that this defining characteristic be the thing that prevents such admission. A belief system that says that men can be pregnant is false. It is perverse for schools to conceal mental health problems of children from their parents.

All of the above is so obviously true that it feels like anyone claiming the opposite must be literally insane. And the fury with which you and other commenters react, the insults, the name calling, all support that view – a shared delusion that is as thin as a soap bubble must be protected and nurtured, because it is so easily popped. It’s the same fury with which religions attack heretics, because this is a religion too, and as false as all of them.

That Anonymous Coward (profile) says:

If only they had built in a system that allowed for new input when a human disagreed with the score; that would have helped the code adapt better.

AI might work great for detecting suspicious growths because it’s comparing apples to apples in every case.
It learned from 2 billion pictures of what cancer can look like, and it can flag some scans for human review.
If the reviewer discovered a series of selected images that definitely weren’t cancer, they would have no problem sounding the alarm that it’s gone stupid.

AI is not like in the movies; it is not all-knowing and perfect. You can’t accept anything it says with 100% certainty unless it’s only comparing single point to single point.
With the number of points needed to predict whether child welfare should get involved, there is nowhere near enough data to feed it to cover all of the points fully.
Then one needs to remove the ‘black swan’ cases where something completely unexpected happened one time and might never repeat because it came so far from left field.

But even then, how do you compare parents to each other? I know survivors of abuse who are fantastic parents, but the common thinking is that it will repeat.
I know people who had perfect childhoods who I would never leave alone with a child.
Not every child of an alcoholic grows up to be a drunk, so the data points aren’t cut & dried.

This is one of those times where TAC’s dream of making them live with it first would have been useful.
Imagine the whole child welfare staff from top to bottom fed into the system to see which of them it thinks needs a visit from the state to protect their child…

OGquaker says:

Child Welfare Services is 95% B.S.

As part of the duty (and the imagined “good standing”) of me running a church in an all-Black hood for 30 years, I have spent decades as a fingerprinted, interviewed, called-before-the-Child-Welfare-Court-Judge, court-appointed “Monitor”.

Hundreds of days spent as the assigned “monitor” of a two-hour, bi-monthly parent visit with their temporary-foster-home-detained “at-risk” child or children.

One year, the temporary foster home was a three-hour drive (each way) from the family’s home & public-school teaching job. Sometimes, a parent would buy me a meal.

After the “Adoption and Safe Families Act” of 1997, the Fed reimburses the States 100% of the $$ for removing, storing, and doing the paperwork on an “at risk” child AWAY FROM THEIR HOUSE for at least 15 months, and forces loss of the parent’s rights to the child except in specific circumstances. The Department of Children and Family Services became a growth industry with no downside, doubling US child placement numbers in 20 years.

Mixed-race child? BINGO!
