Prioritizing patient outcomes to regulate artificial intelligence in health care


Ever wonder if the latest and greatest artificial intelligence (AI) tool you read about in the morning paper is going to save your life? A new study published in JAMA, led by John W. Ayers, Ph.D., of the Qualcomm Institute within the University of California San Diego, finds that question can be difficult to answer, since AI products in healthcare do not universally undergo an externally evaluated approval process assessing how they might benefit patient outcomes before coming to market.

The research team evaluated the recent White House Executive Order that instructed the Department of Health and Human Services to develop new AI-specific regulatory strategies addressing equity, safety, privacy, and quality for AI in healthcare before April 27, 2024. However, team members were surprised to find the order did not once mention patient outcomes, the standard metric by which healthcare products are judged before being allowed to access the healthcare marketplace. 

"The goal of medicine is to save lives. AI tools should prove clinically significant improvements in patient outcomes before they are widely adopted."

Davey Smith, M.D., head of the Division of Infectious Disease and Global Public Health at UC San Diego School of Medicine, co-director of the university's Altman Clinical and Translational Research Institute, and study senior author

According to the team, AI-powered early warning systems for sepsis, a potentially fatal acute illness among hospitalized patients that affects 1.7 million Americans each year, demonstrate the consequences of inadequate prioritization of patient outcomes in regulations. A third-party evaluation of the most widely adopted AI sepsis prediction model revealed that 67% of patients who developed sepsis were not identified by the system. Would hospital administrators have chosen this sepsis prediction system if trials assessing patient outcomes data were mandated, the team wondered, considering the array of available early warning systems for sepsis?

"We are calling for a revision to the White House Executive Order that prioritizes patient outcomes when regulating AI products," added John W. Ayers, Ph.D., who is deputy director of informatics in the Altman Clinical and Translational Research Institute in addition to his Qualcomm Institute affiliation. "Similar to pharmaceutical products, AI tools that impact patient care should be evaluated by federal agencies for how they improve patients' feeling, function, and survival."

The team points to its 2023 study in JAMA Internal Medicine on using AI-powered chatbots to respond to patient messages as an example of what patient outcome-centric regulations can achieve. "A study comparing standard care versus standard care enhanced by AI conversational agents found differences in downstream care utilization in some patient populations, such as heart failure patients," said Nimit Desai, B.S., who is a research affiliate at the Qualcomm Institute, a UC San Diego School of Medicine student, and a study coauthor. "But studies like this don't just happen unless regulators appropriately incentivize them. With a patient outcomes-centric approach, AI for patient messaging and all other clinical applications can truly enhance people's lives."

The team recognizes that its proposed regulatory strategy could be a significant lift for AI and healthcare industry partners and may not be necessary for every AI use case in healthcare. However, the researchers say, excluding patient outcomes-centric rules from the White House Executive Order is a serious omission.

Journal reference:

Ayers, J. W., et al. (2024). Regulate Artificial Intelligence in Health Care by Prioritizing Patient Outcomes. JAMA. https://doi.org/10.1001/jama.2024.0549
