Calibrating AI reliance—A physician’s superhuman dilemma
JAMA Health Forum; by Shefali V. Patil, Christopher G. Myers, Yemeng Lu-Myers; 3/25
Assistive artificial intelligence (AI) technologies hold significant promise for transforming health care by aiding physicians in diagnosing, managing, and treating patients. Leveraging AI's superior diagnostic accuracy in certain specialties, these systems aim to reduce medical errors while also easing physician fatigue by alleviating cognitive load and time pressures. Yet because human operators are perceived as having control over the technology's use, responsibility unduly shifts to the physician, even when there is clear evidence that the AI system produced erroneous outputs. Consequently, although scholars have proposed recommendations for shaping AI regulation, the reality is that in the absence of clear policies or established legal standards, future liability will hinge largely on societal perceptions of blameworthiness. This regulatory gap imposes an immense, almost superhuman, burden on physicians: they are expected to rely on AI to minimize medical errors, yet they bear responsibility for determining when to override or defer to these systems.