Large language models in medicine: Current limitations and future scope

In a recent study published in Communications Medicine, researchers provided a comprehensive overview of the potential and limitations of artificial intelligence (AI)-based large language models (LLMs) in medical research, education, and clinical practice.

Study: The future landscape of large language models in medicine. Image Credit: a-image/Shutterstock.com

Background

LLMs generate text (language) on a topic, akin to human responses, based on a prompt such as a set of keywords or a user query. They can even generate text in specific styles, such as poetry.

These models exhibit an exceptional ability to respond to diverse questions and handle complex concepts. To date, commercial companies such as OpenAI/Microsoft, Meta, and Google have led the development of LLMs.

OpenAI's release of ChatGPT in November 2022 marked a significant advancement in terms of credibility, accessibility, and human-like output based on reinforcement learning from human feedback (RLHF).

Subsequently, Google and Meta introduced their own LLMs, expanding the possibilities with features like visual input and plugins.

In this context, GPT-4, refined with further RLHF following ChatGPT, exceeded the passing threshold of the United States Medical Licensing Examination (USMLE), indicating its potential in the medical field.

However, the rapid increase in LLM applications has raised concerns about their potential misuse, particularly in the medical domain.

LLMs in medicine

The study explored three key areas where LLMs could find applications in medicine: patient care, medical research, and medical education. Effective communication in patient care is crucial, with healthcare professionals often using written text to interact with patients, including medical records and diagnostic results.

LLMs have the potential to improve communication by simplifying medical language, and they can be particularly useful for conditions that carry social stigma, such as sexually transmitted diseases.

Chatbots like First Derm and Pahola already assist doctors in assessing and guiding patients with skin conditions and alcohol abuse, though they may require further improvements in functionality and acceptance by medical professionals.

LLMs also excel at translating medical terminology into different languages, aiding clinical decision-making, therapy adherence, and clinical documentation. They can convert unstructured notes into structured formats, potentially reducing the workload for clinicians.
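
To illustrate the idea (the study itself does not prescribe any implementation), a minimal sketch of structuring a free-text note with a general-purpose LLM API might look like the following, assuming the OpenAI Python SDK; the prompt wording and JSON field names are hypothetical, and identifiable patient data must never be sent to an external service without appropriate safeguards.

```python
# Minimal illustrative sketch: ask an LLM to impose structure on a free-text
# clinical note. Assumes the OpenAI Python SDK (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable. The prompt wording and
# JSON field names are hypothetical; never send identifiable patient data
# to an external API without appropriate safeguards.
from openai import OpenAI

client = OpenAI()

note = "62 y/o male, c/o chest pain x2 days, hx HTN, on lisinopril."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the clinical note as JSON with the keys "
                "age, sex, chief_complaint, history, and medications."
            ),
        },
        {"role": "user", "content": note},
    ],
)

# The model's reply is the structured version of the note.
print(response.choices[0].message.content)
```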

In medical research, LLMs can help with scientific content production, summarizing scientific concepts, and aiding scientists and clinicians with limited technical skills in testing hypotheses and visualizing large datasets. The dynamic updating of scientific models can improve research performance.

In medical education, where the focus is on critical thinking and problem-solving, LLMs can act as personalized teaching assistants, offering interactive learning simulations, breaking down complex concepts, and helping students practice making diagnoses and treatment strategies.

However, their use in education requires careful management so that they do not hinder critical thinking and creativity.

LLMs in educational settings should also be regulated transparently to avoid misinformation and the externalization of medical reasoning, which in medical education could lead to potentially harmful clinical decisions.

Addressing the issue

Despite advancements, the issue of LLMs spreading misinformation remains a concern, particularly in clinical settings. Establishing a legal framework for the use of LLMs in clinical practice is essential.

Non-commercial, open-source LLM projects could also make a valuable contribution to this effort. Additionally, LLM application programming interfaces (APIs) should be secured to protect sensitive data, and researchers should focus on the quality of input data to improve LLM output.
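
As a purely illustrative sketch of the kind of safeguard this implies, sensitive identifiers could be masked before a note ever reaches an external LLM API. The regex patterns below are hypothetical placeholders, not a complete de-identification method; production systems should rely on vetted de-identification tools and comply with applicable privacy regulations.

```python
# Illustrative sketch: mask obvious identifiers before text leaves the clinic.
# These regex patterns are hypothetical placeholders and do NOT constitute a
# complete de-identification method (real systems use vetted tools and must
# comply with regulations such as HIPAA/GDPR).
import re

PATTERNS = {
    "[SSN]": r"\b\d{3}-\d{2}-\d{4}\b",          # US social security numbers
    "[DATE]": r"\b\d{2}/\d{2}/\d{4}\b",         # dd/mm/yyyy or mm/dd/yyyy dates
    "[EMAIL]": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",  # e-mail addresses
}

def redact(text: str) -> str:
    """Replace each matched identifier with its placeholder tag."""
    for placeholder, pattern in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(redact("DOB 04/12/1961, SSN 123-45-6789, contact j.doe@example.com"))
# -> DOB [DATE], SSN [SSN], contact [EMAIL]
```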

Conclusions

LLMs hold promise in the medical domain, but they come with challenges that must be addressed.

Concerns about misinformation, bias, validity, safety, and ethics need to be carefully considered before widespread adoption in clinical practice.

Journal reference:

Clusmann, J., et al. (2023). The future landscape of large language models in medicine. Communications Medicine, 3, 141. https://doi.org/10.1038/s43856-023-00370-1

Written by

Neha Mathur

Neha is a digital marketing professional based in Gurugram, India. She holds a Master's degree in Biotechnology (2008) from the University of Rajasthan and gained pre-clinical research experience through a research project in the Department of Toxicology at the prestigious Central Drug Research Institute (CDRI), Lucknow, India. She also holds a certification in C++ programming.


