Oregon: child protective services to stop using an algorithm trained to detect child maltreatment

At the University Hospital of Dijon, researchers are currently testing an algorithm to screen for child abuse by identifying pathologies and injuries during hospitalizations of very young children. In the US, screening tools are already used by child protective services in many states, but they are proving pernicious: trained on data such as mental health, substance abuse, and prison stays, they reportedly target Black families disproportionately. So, while still convinced that AI can help, Oregon has just announced that it will discontinue the algorithm currently used to decide whether a family investigation is warranted.

When a report of child abuse or neglect is made, social workers are required to conduct an investigation to protect the child's safety.
In the U.S., as child welfare agencies use or consider implementing algorithms, an AP (Associated Press) investigation has highlighted issues regarding transparency, reliability, and racial disparities in the use of AI, including its potential to reinforce bias in the child welfare system.

Allegheny County, Pennsylvania’s algorithm

The algorithm currently in use in Oregon is modeled after the Allegheny County algorithm, which a Carnegie Mellon University team studied in research obtained by the AP. Allegheny's algorithm flagged a disproportionate number of Black children for a "mandatory" neglect investigation compared to white children. The independent researchers also found that social workers disagreed with about one-third of the risk scores the algorithm produced.

The algorithm was trained to predict a child's risk of entering foster care within two years, drawing on detailed personal data collected from birth: health insurance records, substance abuse and mental health history, prison stays and probation records, among other government data sets. It then produces a risk score from 1 to 20; the higher the number, the greater the estimated risk. Neglect, the outcome the algorithm was trained on, can cover many situations, from inadequate housing to poor hygiene. Similar tools could be deployed in other child welfare systems with little or no human oversight, much as algorithms have been used to make decisions in the U.S. criminal justice system, and could thus reinforce existing racial disparities in the child welfare system.
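For illustration only, the sketch below shows how a screening tool of this general kind might be structured: a statistical model trained on administrative records that maps a predicted probability onto a 1-to-20 score. The feature names, toy data, choice of model, and score mapping are all assumptions made for this example; they are not details of the Allegheny County or Oregon systems.

```python
# Hypothetical sketch of a screening-score pipeline; NOT the actual
# Allegheny County or Oregon model. Features, data, and the 1-20
# bucketing rule are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy administrative features (assumed), one row per referral:
# prior CPS referrals, months on public insurance,
# substance-abuse treatment episodes, jail bookings in household.
X = np.array([
    [0, 12, 0, 0],
    [3, 48, 2, 1],
    [1, 24, 0, 0],
    [5, 60, 3, 2],
])
# Training label (assumed): child entered foster care within 2 years.
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def risk_score(features):
    """Map the predicted probability onto a 1-20 scale, mirroring the
    'higher number = higher risk' convention described in the article
    (the exact mapping here is invented)."""
    p = model.predict_proba([features])[0, 1]
    return int(np.clip(np.ceil(p * 20), 1, 20))

print(risk_score([2, 36, 1, 0]))  # prints a score between 1 and 20
```

The concerns raised by the AP and the Carnegie Mellon researchers are less about this mechanical step than about what feeds it: when the training data reflect existing disparities in policing, incarceration, and poverty, the resulting scores can reproduce those disparities.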

One of the research team members stated:

“If the tool had acted on its own to screen for a comparable call rate, it would have recommended that two-thirds of black children be investigated, compared to about half of all other reported children.”

Oregon’s abandonment of the algorithm

A few weeks after these findings, the Oregon Department of Human Services told its staff in a May email that, after a "thorough analysis," agency hotline workers would stop using the algorithm at the end of June in order to reduce disparities among families investigated for child abuse and neglect by child protective services.
Lacey Andresen, director of the agency, said:

“We are committed to continuous improvement in quality and equity.”

Oregon Democratic Sen. Ron Wyden says he is concerned about the increasing use of artificial intelligence tools in child protective services.
He said in a statement:

“Making decisions about what should happen to children and families is far too important a task to give to untested algorithms. I am pleased that the Oregon Department of Human Services is taking the concerns I have raised about racial bias seriously and suspending the use of its screening tool.”

Translated from Oregon : les services de protection de l’enfance vont cesser d’utiliser un algorithme entraîné pour détecter la maltraitance des enfants