The algorithm recommended only around 18% of Black patients for follow-up care, versus 82% of White patients, even when both groups were similarly sick.
"Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients."
article: science.sciencemag.org/content/366/64…
author interview: statnews.com/2019/10/24/wid…
And for context, @m_sendhil and I first chatted almost a year ago, when work from his team showed that a commonly used hospital algorithm could yield racially biased results. (We'll be touching on this work today.)
I’ve been thinking about this sentence for a week:
"The study’s authors then retrained a new algorithm using patients’ biological data, rather than the insurance claims data that the original program used, and found an 84% reduction in bias."
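The mechanism behind that quote can be illustrated with a toy simulation (my own sketch, not the study's code or data): if one group incurs lower spending at the same level of health need, ranking patients by cost instead of need pushes that group out of the top of the list. All numbers here (the 0.6 spending gap, the distributions) are made-up assumptions for illustration only.

```python
# Toy illustration of label-proxy bias: spending as a stand-in for health need.
# Hypothetical setup: groups A and B have identical need distributions, but
# group B incurs only 60% of the cost at equal need (an assumed access gap).
import random

random.seed(0)

patients = []
for group in ("A", "B"):
    for _ in range(1000):
        need = random.gauss(50, 10)             # true health need, same for both groups
        access = 1.0 if group == "A" else 0.6   # assumed spending gap for group B
        cost = need * access + random.gauss(0, 5)
        patients.append({"group": group, "need": need, "cost": cost})

def flagged_share(key, group, top_frac=0.2):
    """Share of `group` among the top `top_frac` of patients ranked by `key`."""
    k = int(len(patients) * top_frac)
    top = sorted(patients, key=lambda p: p[key], reverse=True)[:k]
    return sum(p["group"] == group for p in top) / k

# Ranking by the cost proxy flags almost no group-B patients;
# ranking by true need flags both groups roughly equally.
print("ranked by cost proxy, share from group B:", flagged_share("cost", "B"))
print("ranked by true need,  share from group B:", flagged_share("need", "B"))
```

Swapping the training target from cost to a direct health measure, as the study's authors did, is exactly the kind of change this sketch gestures at: same patients, same model machinery, very different ranking.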
$$$ obfuscates need.
Resist.
statnews.com/2019/10/24/wid… via @statnews
You may be familiar with @m_sendhil's research on scarcity. Now read a @statnews interview with him about his work on racial bias in medicine. Via @scchak
Widely used algorithm for follow-up care in hospitals is racially biased, study finds & @scchak explains. (Cue the Casablanca clip: I am shocked, **shocked**, . . . ) statnews.com/2019/10/24/wid… via @statnews