Evaluating AI systems for motivational interviewing in chronic disease management

Changing health habits – like quitting smoking, exercising more, or sticking to prescribed treatments – is difficult but crucial for preventing and managing chronic diseases. Motivational interviewing (MI), a patient-centered counseling method that helps people find their own motivation to change, has proven effective across many health care settings.

Yet despite strong evidence, MI is not widely used in clinical practice due to challenges like limited time, training demands and payment barriers. Advances in artificial intelligence, however, are opening new possibilities to bring MI to more people through digital tools.

AI-powered chatbots, apps and virtual agents can simulate the supportive, empathetic conversations at the heart of MI. Using approaches ranging from scripted dialogues to advanced large language models like GPT-4 (the model behind ChatGPT), these tools provide around-the-clock, judgment-free support. They may be especially helpful for people who do not seek traditional behavioral health care.
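For readers curious what an LLM-based approach to MI can look like in practice, here is a minimal, purely illustrative sketch, not drawn from any of the reviewed systems: a console chatbot that instructs a general-purpose model to respond in an MI-consistent style. The model name ("gpt-4o"), the prompt wording and the use of the openai Python client are assumptions made for this example.

```python
# Illustrative sketch only: a minimal motivational-interviewing-style chatbot
# built on a general-purpose LLM. The system prompt, model name and loop are
# assumptions for this example, not details reported in the reviewed studies.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MI_SYSTEM_PROMPT = (
    "You are a supportive health coach using motivational interviewing. "
    "Express empathy, ask open-ended questions, offer affirmations, use "
    "reflective listening, and support the person's autonomy. Never lecture, "
    "diagnose, or give medical advice; encourage consulting a clinician."
)

def mi_chat() -> None:
    """Run a simple console conversation, keeping the full message history."""
    messages = [{"role": "system", "content": MI_SYSTEM_PROMPT}]
    print("Coach: What health change have you been thinking about? (type 'quit' to stop)")
    while True:
        user_turn = input("You: ").strip()
        if user_turn.lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user_turn})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"Coach: {answer}")

if __name__ == "__main__":
    mi_chat()
```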

Early studies suggest these AI tools are feasible and acceptable, but it remains unclear how closely they follow core MI principles such as empathy and promoting autonomy, and whether they effectively change behaviors. Evaluating this "MI fidelity" is challenging, as traditional methods need detailed human review and don't scale well.

To fill these important knowledge gaps, researchers from Florida Atlantic University's Charles E. Schmidt College of Medicine conducted the first scoping review of studies on AI-driven systems designed to deliver motivational interviewing.

They focused on exploring how AI tools such as chatbots and large language models are being used to deliver MI, what is known about their usability and acceptability, the extent to which these systems adhere to core MI principles, and the behavioral or psychological outcomes reported so far.

Results, published in the Journal of Medical Internet Research, reveal that the most commonly used AI tools were chatbots, along with some virtual agents and mobile apps, built on technologies ranging from rule-based systems to advanced models like GPT-3.5 and GPT-4. While all aimed to simulate motivational interviewing, the quality and rigor of their evaluations varied. Only a few studies addressed safety concerns around AI-generated content, and most did not detail safeguards against misinformation or inappropriate responses.

While only a few studies reported actual behavioral changes, most focused on important psychological factors like readiness to change and feeling understood. Importantly, no studies examined long-term behavioral outcomes, and follow-up periods were often short or missing entirely. So, while these AI tools appear able to deliver motivational content and influence early indicators of change, their ability to produce lasting behavior change remains unclear.

"Many digital interventions included motivational 'elements' but didn't clearly show if or how they followed formal MI practices. We carefully mapped the specific techniques used – like open-ended questions, affirmations and reflective listening – and looked at how fidelity was assessed, whether through expert review or study design. This level of detail is essential to understand what these AI tools are actually doing and how well they mirror true motivational interviewing."

Maria Carmenza Mejia, M.D., senior author and professor of population health, Schmidt College of Medicine

Findings also show that, despite the strengths of these tools, limitations around emotional nuance and conversational depth were commonly noted.

"Users appreciated the convenience and structure of AI systems but often missed the 'human touch' and complex relational dynamics of face-to-face counseling," said Mejia.

Participants in the studies varied widely from general adults to college students and patients with specific health conditions. Smoking cessation was the most common focus, followed by substance use reduction, stress management, and other health behaviors.

"AI-driven systems show exciting potential to deliver motivational interviewing and support meaningful health behavior change," said Mejia. "These tools are feasible and well-accepted across various health issues, demonstrating key principles like empathy and collaboration. However, few studies have rigorously evaluated their impact on behavior or fidelity. As AI health interventions evolve, future research must focus on robust evaluation, transparency and ethical responsibility. By blending scalable AI technology with proven behavioral frameworks, we can expand access and better support patients facing behavior change challenges."

Study co-authors are FAU medical students Zev Karve, Jacob Caley and Christopher Machado, and Michelle K. Knecht, senior medical librarian at the FAU Schmidt College of Medicine.

Journal reference:

Karve, Z., et al. (2025). New Doc on the Block: Scoping Review of AI Systems Delivering Motivational Interviewing for Health Behavior Change. Journal of Medical Internet Research. https://doi.org/10.2196/78417
