For those of you who read the New York Times piece on Allegheny County's child abuse prediction tool, this article is an important companion piece. It offers insight into why the use of these tools actually doubles down on bias rather than freeing the referral process from biases against poverty, race, and other factors. It's an important read. The excerpt below is especially telling about the problems with the Allegheny model, as well as with reliance on algorithms in child welfare referral decision-making generally:
"The AFST’s predictive variables are drawn from a limited universe of data that includes only information on public resources. The choice to accept such limited data reflects the human discretion embedded in the model—and an assumption that middle-class families deserve more privacy than poor families."
You can read the article in its entirety here.