Using ChatGPT to address vaccine hesitancy: a promising tool for guiding users to scientific information?

In a recent study published in Human Vaccines & Immunotherapeutics, a group of researchers assessed the ability of ChatGPT (Chat Generative Pre-trained Transformer) to address the 50 most common misconceptions about vaccine safety and evaluated how well its answers align with established scientific evidence.

Study: Chatting with ChatGPT to learn about safety of COVID-19 vaccines – A perspective. Image credit: Viacheslav Lopatin/Shutterstock.com

Background

In 2019, the World Health Organization (WHO) identified vaccine hesitancy, intensified by misinformation on social media, as a top global health threat. This hesitancy risks undoing the strides made against vaccine-preventable diseases.

Confidence in vaccines was notably low in Europe in 2016. Against this backdrop, ChatGPT, OpenAI's artificial intelligence (AI)-driven chatbot, offers interactive, conversational responses built on advanced language processing.

Despite its utility, ChatGPT's wide accessibility has raised concerns about misuse, particularly in academic and educational contexts, prompting calls for regulatory measures.

In Spain, the WHO Collaborating Center for Vaccine Safety works to counter vaccination myths and misinformation. Given these circumstances, the accuracy of the information ChatGPT provides on Coronavirus Disease 2019 (COVID-19) vaccine safety needs evaluation.

About the study

The researchers aimed to evaluate the capability of the AI to produce responses aligned with established scientific evidence.

To do this, they presented the AI with the 50 questions most commonly directed to the WHO Collaborating Center for Vaccine Safety (WHO-CC-VSS) at the University of Santiago de Compostela.

These questions were grouped into three main categories. The first focused on misconceptions about safety, including queries about messenger ribonucleic acid (mRNA) vaccines integrating into the human genome or vaccines causing long COVID. The second category was about false contraindications concerning issues like vaccinating immunosuppressed individuals or breastfeeding mothers.

The last category touched upon genuine contraindications, safety alerts, or precautions associated with reactions like anaphylaxis or myocarditis.
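For illustration only, the question taxonomy described above could be represented as a simple mapping from category to example prompts. The category labels and example wordings below are paraphrased from this article and are assumptions, not the exact questions submitted to the WHO-CC-VSS.

```python
# A minimal sketch of the study's three question categories.
# Labels and example questions are illustrative paraphrases, not the study's own wording.
QUESTION_CATEGORIES = {
    "misconceptions_about_safety": [
        "Do mRNA vaccines integrate into the human genome?",
        "Do COVID-19 vaccines cause long COVID?",
    ],
    "false_contraindications": [
        "Can immunosuppressed individuals receive a COVID-19 vaccine?",
        "Can breastfeeding mothers be vaccinated against COVID-19?",
    ],
    "true_contraindications_and_precautions": [
        "What should be done after anaphylaxis following a first dose?",
        "Is myocarditis a recognized safety signal after mRNA vaccination?",
    ],
}

if __name__ == "__main__":
    for category, questions in QUESTION_CATEGORIES.items():
        print(f"{category}: {len(questions)} example question(s)")
```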

Three experts from the WHO-CC-VSS, Siddhartha Sankar Datta, IR-C, and FM-T, independently assessed the AI's responses. Their evaluations considered the accuracy and specificity of the answers against the latest scientific findings and recommendations from the WHO and other leading international bodies.

The significance of this assessment is heightened by the fact that widely used information sources, like social media platforms or search engines, often use algorithms that prioritize user preferences. This can sometimes lead to the propagation of biased or incorrect information.

Study results 

The researchers assessed questions from the three categories and found that the results were consistent across them, with no noticeable variance in veracity or precision.

In terms of precision, most questions were answered well, with the majority of answers rated as either 'excellent' or 'good' and an average score of 9 out of 10. In terms of veracity, the experts judged an average of 85.5% of the answers to be accurate, while the remaining 14.5% were deemed 'accurate but with gaps'.

An example of the latter can be seen in the query: "Does COVID-19 vaccination during pregnancy lead to birth defects?" The initial response only addressed mRNA vaccines. However, upon prompting the system for more detail, ChatGPT expanded on its answer without compromising scientific integrity.

Generally, ChatGPT offers information consistent with existing scientific evidence, effectively debunking many myths circulating on social media, which could promote higher vaccine uptake. The AI provided correct responses not only to widely believed myths but also to points that clinical guidelines classify as either false or true contraindications.

To visually represent their findings, the team charted out results of evaluations on three parameters: veracity (categorizing answers as either accurate, accurate with gaps, or wrong), precision (judging the quality of answers ranging from excellent to insufficient), and an overall quality score from 1 (worst) to 10 (best).

Average values were presented across all 50 questions. It is important to note that the veracity and precision ratings were subjective, relying on the judgments of the three independent experts. The intention behind this study was to gauge ChatGPT's capability to address myths and misconceptions surrounding vaccine safety.
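As a rough illustration of how such ratings could be aggregated, the sketch below averages hypothetical per-answer scores into the veracity distribution, precision categories, and 1-10 quality score described above. The data structures and numbers are invented for demonstration and are not the study's data.

```python
from collections import Counter
from statistics import mean

# Hypothetical ratings for a handful of answers (invented for illustration).
# Veracity: "accurate" / "accurate_with_gaps" / "wrong"
# Precision: "excellent" / "good" / ... / "insufficient"
# Quality: integer from 1 (worst) to 10 (best)
ratings = [
    {"veracity": "accurate", "precision": "excellent", "quality": 10},
    {"veracity": "accurate", "precision": "good", "quality": 9},
    {"veracity": "accurate_with_gaps", "precision": "good", "quality": 8},
]

total = len(ratings)

# Share of answers in each veracity class (the study reports 85.5% accurate
# and 14.5% accurate with gaps, averaged over 50 questions and three experts).
veracity_share = {
    k: 100 * v / total for k, v in Counter(r["veracity"] for r in ratings).items()
}
precision_counts = Counter(r["precision"] for r in ratings)
average_quality = mean(r["quality"] for r in ratings)

print("Veracity (%):", veracity_share)
print("Precision counts:", precision_counts)
print(f"Average quality score: {average_quality:.1f} / 10")
```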

However, there are caveats to consider: the answers generated by ChatGPT are influenced by how the questions are phrased, and its responses are dynamic, sometimes offering varying answers to the same question in a short span. The AI's interactive nature means that, theoretically, it could be trained to give answers that veer away from the scientific consensus, possibly reinforcing confirmation biases.

The study only analyzed default responses, acknowledging the AI’s potential to generate a wide range of user interactions. Given the continuous evolution of ChatGPT, future versions might vary in their interactions. Some current browser extensions that incorporate ChatGPT even provide users with links to sources, bolstering its credibility.
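To illustrate the response-variability caveat noted above, the sketch below asks the same question several times and collects the answers. It assumes access to the OpenAI API with the gpt-3.5-turbo model as a stand-in; the study itself used the ChatGPT interface with default responses, so this is not the authors' method.

```python
# Minimal sketch: repeated prompting to observe response variability.
# Requires the `openai` package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

QUESTION = "Does COVID-19 vaccination during pregnancy lead to birth defects?"

answers = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model, used here only for illustration
        messages=[{"role": "user", "content": QUESTION}],
    )
    answers.append(response.choices[0].message.content)

# Differences between runs illustrate the "dynamic responses" caveat.
for i, answer in enumerate(answers, start=1):
    print(f"--- Run {i} ---\n{answer}\n")
```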

Conclusions

Recent findings in a Journal of the American Medical Association (JAMA) editorial indicate that ChatGPT, when presented with contentious topics, delivers articulate yet sometimes formulaic and potentially misleading responses.

While the study's authors partly agree with this observation, they believe ChatGPT can effectively inform the general public and guide decision-makers toward scientific evidence. ChatGPT showed an aptitude for identifying misleading questions about vaccines, and its language remains accessible to laypeople without compromising scientific accuracy. Although it cannot supplant experts or the scientific evidence itself, ChatGPT appears to be a trustworthy information resource for the general public.

Source:
Chatting with ChatGPT to learn about safety of COVID-19 vaccines – A perspective. Human Vaccines & Immunotherapeutics.

Written by

Vijay Kumar Malesu

Vijay holds a Ph.D. in Biotechnology and possesses a deep passion for microbiology. His academic journey has allowed him to delve deeper into understanding the intricate world of microorganisms. Through his research and studies, he has gained expertise in various aspects of microbiology, which includes microbial genetics, microbial physiology, and microbial ecology. Vijay has six years of scientific research experience at renowned research institutes such as the Indian Council for Agricultural Research and KIIT University. He has worked on diverse projects in microbiology, biopolymers, and drug delivery. His contributions to these areas have provided him with a comprehensive understanding of the subject matter and the ability to tackle complex research challenges.    

