Racial differences in pain assessment and false beliefs about race in AI models

10/19/24 at 03:30 AM

JAMA Network Open; Brototo Deb, MD, MIDS; Adam Rodman, MD, MPH; 10/24
Physicians undertreat Black patients’ pain compared with White patients, irrespective of setting and type of pain, likely because pain is underassessed and undertreated even when it is recognized. Large language models (LLMs) encode racial and ethnic biases and may perpetuate race and ethnicity–based medicine. Although LLMs rate pain similarly across races and ethnicities, they underestimate pain among Black individuals when prompted with false beliefs about race. Given LLMs’ significant abilities in assisting with clinical reasoning, as well as the human tendency toward automation bias, these biases could propagate race and ethnicity–based medicine and the undertreatment of pain in Black patients. Mitigating these biases requires multiple strategies across the dataset preparation, training, and posttraining stages.