AI shouldn't decide who dies. It's neither human nor humane

09/23/24 at 03:00 AM

Fox News; by John Paul Kolcun and Anthony Digiorgio; 9/20/24 
[Opinion] As we write this, PubMed ... indexes 4,018 publications with the keyword "ChatGPT." Indeed, researchers have been using AI and large language models (LLMs) for everything from reading pathology slides to answering patient messages. However, a recent paper in the Journal of the American Medical Association suggests that AI can act as a surrogate in end-of-life discussions. This goes too far.

The authors of the paper propose creating an AI "chatbot" to speak for an otherwise incapacitated patient. To quote: "Combining individual-level behavioral data—inputs such as social media posts, church attendance, donations, travel records, and historical health care decisions—AI could learn what is important to patients and predict what they might choose in a specific circumstance." The AI could then express, in conversational language, what that patient "would have wanted," to inform end-of-life decisions.

We are both neurosurgeons who routinely have these end-of-life conversations with patients' families, as we care for those with traumatic brain injuries, strokes and brain tumors. These gut-wrenching experiences are a common, challenging and rewarding part of our job. Our experience teaches us how to connect and bond with families as we guide them through a life-changing ordeal. In some cases, we shed tears together as they navigate their emotional journey and determine what their loved one would tell us to do if they could speak.
