Psychological frameworks help AI models provide better health care advice

Researchers at Technische Universität Berlin have discovered that teaching Large Language Models (LLMs) to mimic human intuition and reasoning significantly improves their ability to provide accurate medical care-seeking advice. The study, published in JMIR Biomedical Engineering from JMIR Publications, suggests a paradigm shift in prompt engineering: moving away from computer-focused instructions toward strategies rooted in applied psychology.

As millions of users turn to tools like ChatGPT for health advice, a persistent issue remains: AI often defaults to emergency or professional care recommendations, even for minor issues, out of extreme caution. This over-triage can lead to unnecessary healthcare costs and patient anxiety.

The breakthrough: Naturalistic decision-making (NDM)

The research team, led by Marvin Kopka and Markus A. Feufel, tested 10 different ChatGPT models (including the newest GPT-4o and GPT-5 series) using prompts inspired by Naturalistic Decision-Making (NDM). Unlike traditional rule-based or analytical decision logic, NDM describes how human experts make high-stakes decisions under uncertainty.

The study utilized two specific psychological frameworks, illustrated in the sketch after this list:

  • Recognition-primed decision-making (RPD): Instructing the AI to match the patient's symptoms to typical cases and mentally simulate the outcome.

  • Data-frame theory: Tasking the AI to build a mental frame of the situation and constantly question it as new data emerges.
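To make this concrete, below is a minimal, hypothetical sketch of how such NDM-inspired instructions could be packaged as a system prompt using the OpenAI Python client. The prompt wording, the patient vignette, and the model choice are illustrative assumptions, not the authors' published prompts.

```python
# Minimal sketch: sending an NDM-inspired system prompt to a chat model.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
# environment variable. All prompt text is illustrative, not the
# study's exact wording.
from openai import OpenAI

client = OpenAI()

NDM_SYSTEM_PROMPT = (
    "You advise patients on whether and where to seek care. "
    # Recognition-primed decision-making: match to typical cases,
    # then mentally simulate the outcome of the recommended action.
    "First, match the described symptoms to typical cases you know and "
    "mentally simulate what would happen if the patient followed your advice. "
    # Data-frame theory: build a frame and keep questioning it.
    "Then, state your frame (working explanation) of the situation and "
    "question that frame whenever new information arrives. "
    "Finally, recommend one of: self-care, non-urgent care, or emergency care."
)

vignette = (  # hypothetical patient description for illustration
    "For two days I have had a runny nose, a mild sore throat, "
    "and a temperature of 37.8 °C. No shortness of breath."
)

response = client.chat.completions.create(
    model="gpt-4o",  # one of the model families tested in the study
    messages=[
        {"role": "system", "content": NDM_SYSTEM_PROMPT},
        {"role": "user", "content": vignette},
    ],
)
print(response.choices[0].message.content)
```

Folding both frameworks into a single system prompt keeps the recognition step (match and simulate) and the frame-questioning step together, which mirrors the general strategy the article describes.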

Key results

  • Significant accuracy boost: NDM-inspired prompts increased overall accuracy across all models. The most notable gains were in self-care advice, which jumped from a meager 13.4% with standard prompts to nearly 30% with NDM reasoning.

  • Activating "thinking" in simpler models: Non-reasoning models (which typically failed to identify self-care cases) began providing accurate, nuanced advice when given a "human reasoning blueprint."

  • Safety maintained: While the AI became better at identifying when it was safe to stay home, it maintained its high accuracy in identifying true emergencies.

"When testing AI, we too often give it perfect information and then see that it performs extremely well," said author Marvin Kopka. "But many problems in the real world are ill-defined. We have good models for how experts make decisions in such situations, so using them as prompts seemed like an obvious next step. I hope that applying human decision-making to LLMs will help us develop AI tools that are also useful in real-world decision-making."

Bridging the gap to personalized medicine

The study suggests that in real-world situations, where medical data is often messy or incomplete, a "reasoning blueprint" based on human cognition can be more effective than standard computational logic. By instructing the AI to simulate outcomes and question its own initial "frames" of a situation, the researchers were able to mitigate the common AI tendency toward over-caution.
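As an illustration of that frame-questioning step, a conversation could be extended when new information emerges, explicitly asking the model to re-examine its initial frame rather than merely appending facts. Again, this is a hypothetical sketch: the follow-up wording and the shortened prompt are stand-ins, not the authors' materials.

```python
# Follow-up sketch: the data-frame "question your frame" step in a
# multi-turn exchange. Assumes the OpenAI Python SDK; all prompt text
# is an illustrative stand-in, not the study's protocol.
from openai import OpenAI

client = OpenAI()

# Shortened stand-ins for the system prompt and vignette from the first sketch.
NDM_SYSTEM_PROMPT = (
    "Advise on care-seeking. State a frame (working explanation) of the "
    "situation and question that frame whenever new information arrives."
)
vignette = "Two days of runny nose, mild sore throat, temperature 37.8 °C."

messages = [
    {"role": "system", "content": NDM_SYSTEM_PROMPT},
    {"role": "user", "content": vignette},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append(
    {"role": "assistant", "content": first.choices[0].message.content}
)

# New data emerges: ask the model to re-examine its frame explicitly,
# rather than simply adding the new facts to the conversation.
messages.append({
    "role": "user",
    "content": (
        "Update: the fever is now 39.6 °C and I feel short of breath. "
        "Does your initial frame still fit, or should it be revised? "
        "Explain, then update your care-seeking recommendation."
    ),
})
follow_up = client.chat.completions.create(model="gpt-4o", messages=messages)
print(follow_up.choices[0].message.content)
```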

While these findings mark a significant step forward in making LLMs more effective partners in clinical decision-making, the team notes that the approach is currently best suited to controlled environments. Future research will be essential to determine whether these NDM-inspired prompts translate into better decision support for everyday users in non-standardized settings.

Journal reference:

Kopka, M., & Feufel, M. A. (2025). Increasing LLM Accuracy for Care-Seeking Advice Using Prompts Reflecting Human Reasoning Strategies in the Real World: A Validation Study (Preprint). JMIR Biomedical Engineering. DOI: 10.2196/88053. https://biomedeng.jmir.org/2026/1/e88053
