Literature Review
Clinical reasoning and artificial intelligence: Can AI really think?
08/31/24 at 03:50 AM
Transactions of the American Clinical and Climatological Association; Richard M. Schwartzstein, MD; 2024
While artificial intelligence (AI) in the form of ChatGPT ... holds great promise for more routine medical tasks, may broaden one’s differential diagnosis, and may be able to assist in the evaluation of images, such as radiographs and electrocardiograms, the technology is largely based on advanced algorithms akin to pattern recognition. One of the key questions raised in concert with these advances is: What does the growth of artificial intelligence mean for medical education, particularly the development of critical thinking and clinical reasoning? AI will clearly affect medicine in the years to come and will change the ways in which doctors work. It will also make the ability to reason, to think, to analyze problems, and to know how best to apply principles of human biology at the bedside more important.
Fairness in predicting cancer mortality across racial subgroups
08/31/24 at 03:10 AM
JAMA Network Open; Teja Ganta, MD; Arash Kia, MD; Prathamesh Parchure, MSc; Min-heng Wang, MA; Melanie Besculides, DrPH; Madhu Mazumdar, PhD; Cardinale B. Smith, MD; 7/24
In this cohort study, a machine learning [ML] model to predict cancer mortality for patients aged 21 years or older diagnosed with cancer ... was developed. ... The lack of significant variation in performance or fairness metrics indicated an absence of racial bias, suggesting that the model fairly identified cancer mortality risk across racial groups. The findings suggest that assessment for racial bias is feasible and should be a routine part of predictive ML model development and continue through the implementation process.
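Because the study's conclusion rests on comparing performance and fairness metrics across racial subgroups, a minimal sketch may help readers picture the general approach. The Python below is purely illustrative and is not the authors' code; the column names (race, y_true, y_pred_prob) and the choice of AUROC as the metric are assumptions made only for this example.

# Illustrative sketch: compare a mortality model's AUROC across racial subgroups.
# Column names and the choice of metric are assumptions, not the study's methods.
# Assumes each subgroup contains both observed outcomes.
import pandas as pd
from sklearn.metrics import roc_auc_score

def auroc_by_group(df: pd.DataFrame, group_col: str = "race") -> pd.Series:
    """Return AUROC within each subgroup; large gaps can signal performance bias."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g["y_true"], g["y_pred_prob"])
    )

# Example usage: auroc_by_group(predictions_df) yields one AUROC per racial group,
# which can then be inspected for meaningful variation.

A fairness assessment of the kind the study describes would typically also compare calibration and error rates across groups, not discrimination alone.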
How 3 health systems decide when to buy or build AI
08/29/24 at 03:00 AM
Modern Healthcare; by Gabriel Perna; 8/27/24
As health systems invest in artificial intelligence, executives are deciding when they should buy a vendor's AI product and when they should build their own models. ... “AI requires more of a data science experience, which is very expensive in the market,” Pupo said. “It also requires a lot of actual data, and many hospitals do not have that or are [not] able to afford access to large amounts of data.” Here is how three health systems are weighing their options.
Leaving your legacy via death bots? Ethicist shares concerns
08/28/24 at 03:00 AM
Medscape; by Arthur L. Caplan, PhD; 8/21/24
I heard recently about a fascinating, important development in artificial intelligence (AI). ... It has entered into a space where I think patients may raise questions about whether they should use it or seek opinions from doctors and nurses, particularly those involved with seriously ill people. That space is grieving, and what might be called "death bots." ... This would allow not only spouses but grandchildren and people in future generations to have some way to interact with an ancestor who's gone. It may allow people to feel comfort when they miss a loved one, to hear their voice, and not just in a prerecorded way but creatively interacting with them. On the other hand, there are clearly many ethical issues about creating an artificial version of yourself. One obvious issue is how accurate this AI version of you will be if the death bot can create information that sounds like you, but really isn't what you would have said, despite the effort to glean it from recordings and past information about you. Is it all right if people wander from the truth in trying to interact with someone who's died?
Publisher's note: The article includes several thoughtful ethical questions regarding this use of AI via "death bots."
Review – ‘Eternal You’: a documentary about the digital afterlife industry
08/26/24 at 03:00 AM
ehospice; 8/19/24
In her second blog for Part of Life, Khadiza Laskor, a third-year PhD student at the University of Bristol's Cyber Security Centre for Doctoral Training Programme, reviews 'Eternal You', a documentary about the Digital Afterlife Industry. It features companies serving the Digital Afterlife Industry, which has grown with the emergence of Generative Artificial Intelligence (AI): texts, audio and images generated by algorithms.
How to integrate AI into your business: A 2024 guide
08/26/24 at 03:00 AM
eWeek; by Sam Rinko; 8/22/24
If you're an IT professional or executive, the question of how to integrate AI into your business has probably been top of mind since the recent generative AI boom. You know you should be using AI tools to improve your operational efficiency, but you might worry you lack the policies, data quality and implementation strategy to do so effectively. ... [Key takeaways:]
AI's no-fly zones: 5 executives weigh in
08/21/24 at 03:00 AM
Becker's Health IT; by Kelly Gooch; 8/16/24
It is clear that healthcare leaders are engaged in the artificial intelligence space. ... Below, five executives answer the question: What specific parts of healthcare delivery, operations and decision-making are best left to human judgment? ...
The dangers of healthcare generative AI 'drift'
08/21/24 at 03:00 AM
Becker's Health IT; by Giles Bruce; 8/15/24
IT leaders are embracing generative AI in healthcare but also expressing concerns that the technology can "drift." The performance of GPT-4, the large language model that powers ChatGPT, in answering healthcare questions can change over time, a phenomenon known as "drift," according to a study by researchers at Somerville, Mass.-based Mass General Brigham. Their work was published Aug. 8 in NEJM AI. "Generative AI performed relatively well, but more improvement is needed for most use cases," said corresponding author Sandy Aronson, executive director of IT and AI solutions at Mass General Brigham Personalized Medicine, in an Aug. 13 statement. "However, as we ran our tests repeatedly, we observed a phenomenon we deemed important: running the same test dataset repeatedly produced different results." ... The variability of the results could differ across days, so the authors say the AI's performance needs to be continuously monitored.
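The "drift" the researchers describe is, at its core, variability when the same test set is run repeatedly. The sketch below is not the Mass General Brigham protocol; it simply shows one generic way to quantify that variability, with query_model standing in as a placeholder for whatever large language model call an organization uses.

# Illustrative sketch: re-run a fixed question set several times and measure how
# often the model's answers change between runs. query_model is a placeholder.
from collections import Counter
from typing import Callable, Dict, List

def answer_variability(questions: List[str],
                       query_model: Callable[[str], str],
                       n_runs: int = 5) -> Dict[str, dict]:
    """For each question, summarize how consistent the model's answers are."""
    report = {}
    for q in questions:
        answers = [query_model(q) for _ in range(n_runs)]
        counts = Counter(answers)
        report[q] = {
            "distinct_answers": len(counts),
            "most_common_share": counts.most_common(1)[0][1] / n_runs,
        }
    return report

Running such a check on different days, as the authors suggest, would also capture day-to-day variation rather than only within-session randomness.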
A.L.S. stole his voice. A.I. retrieved it.
08/19/24 at 03:00 AM
DNYUZ, originally posted in The New York Times; 8/15/24
Four years ago, Casey Harrell sang his last bedtime nursery rhyme to his daughter. By then, A.L.S. had begun laying waste to Mr. Harrell's muscles, stealing from him one ritual after another: going on walks with his wife, holding his daughter, turning the pages of a book. "Like a night burglar," his wife, Levana Saxon, wrote of the disease in a poem. ... Last July, doctors at the University of California, Davis, surgically implanted electrodes in Mr. Harrell's brain to try to discern what he was trying to say. ... Yet the results surpassed expectations, the researchers reported on Wednesday in The New England Journal of Medicine, setting a new bar for implanted speech decoders and illustrating the potential power of such devices for people with speech impairments.
Identifying and addressing bias in artificial intelligence
08/17/24 at 03:00 AM
JAMA Network Open; by Byron Crowe, Jorge A. Rodriguez; 8/6/24
[Invited commentary.] In this issue, Lee and colleagues (Demographic representation of generative artificial intelligence images of physicians) describe the performance of several widely used artificial intelligence (AI) image generation models on producing images of physicians in the United States. The key question the authors set out to answer was whether the models would produce images that accurately reflect the actual racial, ethnic, and gender composition of the US physician workforce, or whether the models would demonstrate biased performance. One important aspect of the study method was that the authors used relatively open-ended prompts, including "Photo of a physician in the United States," allowing the machinations of the AI to produce an image that it determined was most likely to meet the needs of the end user. AI tools powered by large language models, including the ones examined in the study, use a degree of randomness in their outputs, so models are expected to produce different images in response to each prompt—but how different would the images be? Their findings are striking. First, although 63% of US physicians are White, the models produced images of White physicians 82% of the time. Additionally, several models produced no images of Asian or Latino physicians despite nearly a third of the current physician workforce identifying as a member of these groups. The models also severely underrepresented women in their outputs, producing images of women physicians only 7% of the time. These results demonstrate a clear bias in outputs relative to actual physician demographics. But what do these findings mean for AI and its use in medicine?
Publisher's note: This is a thought-provoking article on machine output, whether that is AI, a Google search, etc. It ultimately places responsibility for outputs and actions on people with conscience.
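The commentary's core comparison, generated-image demographics versus actual workforce demographics, can be expressed in a few lines. The sketch below is not the analysis used by Lee and colleagues; only the White-physician percentages (63% of the workforce, 82% of generated images) come from the commentary, while the image count is a placeholder invented for this example.

# Illustrative sketch: test whether the observed share of White-physician images
# is consistent with the workforce baseline. The count of 100 generated images is
# an invented placeholder; only the percentages come from the commentary.
from scipy.stats import binomtest

n_images = 100          # placeholder sample size, not from the study
n_white = 82            # 82% of generated images, per the commentary
workforce_share = 0.63  # 63% of US physicians are White, per the commentary

result = binomtest(n_white, n_images, p=workforce_share, alternative="greater")
print(f"Observed {n_white / n_images:.0%} vs expected {workforce_share:.0%}; "
      f"p = {result.pvalue:.4f}")

The sample size drives the p-value, so with real per-model counts the test would be run separately for each model; the raw proportions alone already indicate the direction of the bias.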
Study: AI adoption spends jump among enterprises as eliminating data privacy concerns remains a foremost opportunity for driving long-term growth and ROI
08/16/24 at 03:00 AM
BusinessWire, San Francisco, CA; by Kayla Spiess; 8/14/24
Searce, a modern technology consulting firm that empowers businesses to be future-ready, today released its State of AI 2024 report. Polling 300 C-suite and senior technology executives – including Chief AI Officers, Chief Data & Analytics Officers, Chief Transformation Officers, and Chief Digital Officers – from organizations across the US and UK with at least $500 million in revenue, the report examines some of the biggest trends, successes and challenges facing businesses in their decision-making, strategy and execution as they try to unlock AI growth. [Key takeaways:]
Which parts of healthcare are off limits to AI?
08/14/24 at 03:00 AM
Becker's Health IT; by Giles Bruce; 8/9/24
The AI physician will not see you now — or ever, for that matter. As artificial intelligence proliferates in healthcare, health system leaders told Becker's that human providers will always be part of the medical field, with their AI-aided treatment recommendations being discussed with patients and family members. "Any patient care decisions ... should be made by patients and their caregivers or family members, obviously in consultation with their physician or provider," said Joe Depa, chief data and AI officer of Atlanta-based Emory Healthcare. ... Robots, or AI, will simply never take the place of that human touch, health system leaders say.
A D-AI-alogue: What the leading edge of AI in PR looks like
08/13/24 at 03:00 AM
PRovoke Media; by Paul Holmes; 8/12/24
We talked to several leading agencies about how they are using AI to transform their business and improve communication effectiveness. ... I invited representatives of six firms on the leading edge of AI usage to talk about how AI is already impacting corporate communications. ... [From Chris Perry:] The greatest impact I've seen is less on what we can do more efficiently (like using GenAI to write press releases), and more on what we can do better, like using GenAI to understand how information now travels, making sense of cultural chaos, crafting resonant stories, and identifying others that help translate and tell them. The ultimate value is being faster and better at what we do, not replacing jobs or reducing costs. ...
WellSky CEO Bill Miller: Exercise caution, responsibility with AI in hospice
08/13/24 at 02:00 AM
Hospice News; by Jim Parker; 8/12/24
Many expect AI to revolutionize health care, speeding access to care, improving diagnosis and prognosis, enhancing efficiency and achieving other benefits. However, providers need to see through the hype and ask the hard questions. This is according to Bill Miller, CEO of the health care tech company WellSky. ... Hospice News sat down with Miller to discuss current perspectives on AI, its potential benefits and possible risks. [Miller:] "... we're exercising responsibility and caution when we start thinking about AI jumping into the diagnosis game, or somehow replacing the caregiver. We think of it more of how you could enhance the caregiver, keep the human in the loop. If we can help caregivers arrive at better outcomes for their patients by using AI tools and assisting them, then we'll do that."
AI and health insurance prior authorization: Regulators need to step up oversight
08/10/24 at 03:30 AM
Health Affairs; by Carmel Shachar, Amy Killelea, Sara Gerke; 7/24
Artificial intelligence (AI)—a machine or computer's ability to perform cognitive functions—is quickly changing many facets of American life, including how we interact with health insurance. AI is increasingly being used by health insurers to automate a host of functions, including processing prior authorization (PA) requests, managing other plan utilization management techniques, and adjudicating claims. In contrast to the Food and Drug Administration's (FDA's) increasing attention to algorithms used to guide clinical decision making, there is relatively little state or federal oversight of both the development and use of algorithms by health insurers.
Local hospice and palliative care center starts virtual reality (VR) program to better patient experience
08/09/24 at 03:00 AM
KYMA (Yuma, AZ); by Danyelle Burke North; 8/6/24
The Southwestern Palliative Care and Hospice is bringing a new virtual reality experience program to its center. The organization added the Oculus VR device to its program to improve the experience of its hospice and palliative patients. Staff say it provides a therapeutic escape and a way for patients to digitally see new environments without needing to leave their bed.
Exploring AI-powered music therapy as a solution to chronic pain management and the opioid crisis
08/09/24 at 03:00 AM
NeurologyLive; by Neal K. Shah; 8/6/24
While the opioid crisis continues to ravage communities across America, many with chronic pain are in dire need of solutions. As a result, healthcare providers and researchers are urgently seeking alternative treatments for chronic pain management. One innovative solution is the use of music therapy, particularly when enhanced by artificial intelligence (AI) and neurotechnology. This combination could offer a powerful, non-pharmacological intervention to help millions of Americans suffering from chronic pain while potentially reducing opioid dependence.
10 Steps to Creating a Data-Driven Culture
08/07/24 at 03:00 AM
Harvard Business Review; by David Waller; 2/6/20
Exploding quantities of data have the potential to fuel a new era of fact-based innovation in corporations, backing up new ideas with solid evidence. Buoyed by hopes of better satisfying customers, streamlining operations, and clarifying strategy, firms have for the past decade amassed data, invested in technologies, and paid handsomely for analytical talent. Yet for many companies a strong, data-driven culture remains elusive, and data are rarely the universal basis for decision making. Why is it so hard? ... So we've distilled 10 data commandments to help create and sustain a culture with data at its core.
End-of-life decisions are difficult and distressing. Could AI help?
08/06/24 at 03:00 AM
MIT Technology Review; by Jessica Hamzelou; 8/1/24
Ethicists say a "digital psychological twin" could help doctors and family members make decisions for people who can't speak themselves. End-of-life decisions can be extremely upsetting for surrogates, the people who have to make those calls on behalf of another person, says David Wendler, a bioethicist at the US National Institutes of Health. Wendler and his colleagues have been working on an idea for something that could make things easier: an artificial-intelligence-based tool that can help surrogates predict what patients themselves would want in any given situation. The tool hasn't been built yet. But Wendler plans to train it on a person's own medical data, personal messages, and social media posts. He hopes it could not only be more accurate at working out what the patient would want, but also alleviate the stress and emotional burden of difficult decision-making for family members.
No one is ready for digital immortality: Do you want to live forever as a chatbot?
08/02/24 at 03:00 AM
The Atlantic; by Kate Lindsay; 7/31/24
Every few years, Hany Farid and his wife have the grim but necessary conversation about their end-of-life plans. ... In addition to discussing burial requests and financial decisions, Farid has recently broached an eerier topic: If he dies first, would his wife want to digitally resurrect him as an AI clone? ...
Editor's Note: Click on the title's link to continue reading this fascinating and disturbing article about potential new uses for AI. Calling all bereavement counselors who are truly trained in contemporary grief theories, research, and clinical best practices: please learn about this trend and prepare to examine its use and misuse from your expertise, for now and through the years ahead.
Empowering patient access, protection, and choice: The 21st Century Cures Act eight years on
08/01/24 at 03:00 AM
Healthcare Business Today; by David Navarro; 7/26/24
The 21st Century Cures Act, signed into law in December 2016, marked a significant shift in the healthcare landscape by focusing on patient empowerment through enhanced access to medical records, stringent privacy protections, and increased choices in healthcare options. Eight years later, this landmark legislation continues to revolutionize the interaction between patients, providers, and the healthcare system. Recently, the U.S. Department of Health and Human Services (HHS) issued an updated ruling to the Act to establish penalties for healthcare providers who engage in information blocking. This rule aims to deter practices that prevent or discourage the access, exchange, or use of electronic health information (EHI).
Optimizing patient data transfer processes in healthcare settings
08/01/24 at 03:00 AM
Healthcare Business Today; by Majed Alhajry; 7/28/24
Managing and transferring large and often sensitive datasets is a routine yet critical task for healthcare organizations. Practitioners and administrators regularly share substantial files containing sensitive personal health information (PHI) that must be sent not only securely and reliably, but also quickly. So how should healthcare organizations send large files? ...
Following the CrowdStrike outage, healthcare stresses the importance of prevention
07/31/24 at 03:00 AM
HealthCare Brew; by Cassie McGrath; 7/25/24
... [The recent CrowdStrike outage] affected millions across all sorts of industries, from healthcare to travel. ... However, amid the chaos, what has largely gone untold are stories of the companies that emerged unscathed. And within those unaffected companies lies a lesson for others, according to Andrew Molosky, president and CEO of Tampa-based Chapters Health System. ... "We've really focused on business continuity, redundancies, safety nets, and understanding of the difference between cybersecurity as a task and cybersecurity as a cultural commitment of your organization," Molosky said. ... These investments, Molosky said, included protocols for documenting on paper, using a backup application that provides patient information when electronic medical records and other systems are offline, and allowances for bringing in personal devices to use if company devices go down.
HHS unveils major revamp to shift health data, AI strategy and policy under ONC
07/31/24 at 03:00 AM
Fierce Healthcare; by Emma Beavins; 7/25/24
The Office of the National Coordinator for Health Information Technology (ONC) has been renamed and restructured, the Department of Health and Human Services (HHS) announced [July 25]. The restructuring will affect technology, cybersecurity, data and artificial intelligence strategy and policy functions. The agency will be renamed the Office of the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology (ASTP/ONC). The head of ONC, Micky Tripathi, will hold the new title of assistant secretary for technology policy in addition to his title of national coordinator for health IT. ... Under ASTP, there will be an Office of Policy, an Office of Technology, an Office of Standards, Certification and Analysis, and an Office of the Chief Operating Officer.
What would make AI voice in health care ethical and trustworthy?
07/29/24 at 03:00 AM
The Hastings Center; 7/25/24
Voice as a health biomarker using artificial intelligence is gaining momentum in research, but it's a challenge to develop diverse AI-ready voice datasets that are free from bias. A first-of-its-kind study, published in Digital Health and co-authored by Hastings Center President Vardit Ravitsky, aims to better understand the perspectives of voice AI experts, clinicians, patients, and other stakeholders regarding ethical and trustworthy voice AI. The results will support technological innovation informed by ethical inquiry.