The Pros and Cons of Healthcare Chatbots


Chatbots are conversational platforms driven by artificial intelligence (AI) that respond to queries based on algorithms. They are considered ground-breaking technologies in customer relations. Since healthcare chatbots can be on duty tirelessly both day and night, they are an invaluable addition to patient care.

Medical chatbots. Image Credit: olya osyunina/Shutterstock.com

Advantages

Chatbots can give patients a tireless, constant point of contact with the healthcare system. The anonymity of these conversations gives patients the confidence to share personal information, especially in the area of mental healthcare.

Other important areas that could safely be entrusted to chatbots include gathering routine data, scheduling appointments, administrative work around admissions and discharges, sending reminders, tracking symptoms, creating medical records, facilitating insurance and payment procedures, and telehealth.

Reduced costs, improved efficiency

Chatbots known as virtual assistants or virtual humans can handle the initial contact with patients, asking and answering the routine questions that inevitably come up. During the coronavirus disease 2019 (COVID-19) pandemic in particular, chatbots that screened for the infection by asking a predefined sequence of questions and assessing the resulting risk could spare staff thousands of manual screenings.

This saves physical resources, manpower, money and effort while accomplishing screening efficiently. Chatbots can also recommend care options once users enter their symptoms.
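As a purely illustrative sketch, the kind of fixed-order screening flow described above can be expressed as a short rule-based script. The questions, weights and risk thresholds below are hypothetical examples, not a validated clinical protocol.

# Illustrative sketch of a rule-based screening chatbot.
# The questions, weights, and risk thresholds are hypothetical,
# not drawn from any validated clinical protocol.

SCREENING_QUESTIONS = [
    ("Do you have a fever?", 2),
    ("Do you have a new, continuous cough?", 2),
    ("Have you lost your sense of taste or smell?", 3),
    ("Have you recently been in contact with a confirmed case?", 3),
]

def ask_yes_no(prompt: str) -> bool:
    """Ask one question and return True for a 'yes' answer."""
    return input(f"{prompt} (yes/no): ").strip().lower() in {"y", "yes"}

def run_screening() -> str:
    """Ask the questions in a fixed order and return a care recommendation."""
    score = sum(weight for question, weight in SCREENING_QUESTIONS if ask_yes_no(question))
    if score >= 5:
        return "Higher risk: please contact a healthcare professional about testing."
    if score >= 2:
        return "Possible risk: self-monitor and consider getting tested."
    return "Lower risk: no action needed now; contact a clinician if symptoms develop."

if __name__ == "__main__":
    print(run_screening())

In practice, such a script would sit behind a conversational front end and hand the collected answers to staff or to a care-recommendation step, but the underlying logic remains a fixed question order feeding a simple risk score.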

Answering questions

Medical chatbots are especially useful because they can answer questions that should not be ignored, questions asked by anxious patients or their caregivers, but that do not need a highly trained medical professional to answer. Since such tools spare patients from coming in for an appointment just to have their questions answered, they save time for both patients and healthcare providers while providing useful information in a timely fashion.

“The answers not only have to be correct, but they also need to adequately fulfill the users’ needs and expectations for a good answer.” More importantly, errors in answers from automated systems destroy trust more than errors made by humans.

Improving diagnostic accuracy

One stream of healthcare chatbot development focuses on deriving new knowledge from large datasets, such as scans. This is different from the more traditional image of chatbots that interact with people in real-time, using probabilistic scenarios to give recommendations that improve over time.

Monitoring patients

Besides answering questions related to illness, medications and common occurrences during the course of a chronic condition, chatbots can help evaluate how a patient is doing during follow-up and schedule an appointment with the physician when further care is required.

Chatbots as healthcare companions

Medical (social) chatbots can interact with patients who are prone to anxiety, depression and loneliness, allowing them to share their emotional issues without fear of being judged, and providing good advice as well as simple company.

Smoothing insurance issues

Chatbots are well equipped to help patients get their healthcare insurance claims approved speedily and without hassle since they have been with the patient throughout the illness. Not only can they recommend the most useful insurance policies for the patient’s medical condition, but they can save time and money by streamlining the process of claiming insurance and simplifying the payment process.

Improving patient satisfaction

By allowing patients to get instant answers at the first point of contact, providing simplified and timely appointments, and following up after the visit, healthcare chatbots not only improve satisfaction with the whole healthcare experience but may also allow physicians to spend more time with their patients.

Greater efficiency

Chatbot advocates say that the time and effort saved for healthcare professionals, the more accurate recording and handling of information, the reduced risk of mistakes, and the ability to use past and present data to predict outcomes are bound to increase the efficiency of healthcare in public hospitals. Chatbots can also be used to automate some aspects of clinical decision-making by developing protocols based on data analysis.

Routine diagnostic tasks, online consultations and other virtual assistance can be handled by chatbot algorithms, but factors that should be included for a reliable outcome may be left out.

Virtual healthcare. Image Credit: Visual Generation/Shutterstock.com

Disadvantages

Despite the obvious pros of using healthcare chatbots, they also have major drawbacks.

Increased costs

The development of more reliable algorithms for healthcare chatbots requires programming expertise, which must be paid for. Moreover, backup systems must be designed for failsafe operation, which adds further cost and may introduce unexpected problems.

Incomplete assessment

Many healthcare experts feel that chatbots may help with the self-diagnosis of minor illnesses, but the technology is not advanced enough to replace visits with medical professionals. However, collaborative efforts on fitting these applications to more demanding scenarios are underway. Beginning with primary healthcare services, the chatbot industry could gain experience and help develop more reliable solutions.

Still, because chatbots cannot take in all the personal details associated with a patient, they, and the experts who rely on them, may be led into inaccuracies in medical practice, raising medical liabilities and the prospect of new ethical issues.

For all their apparent understanding of how a patient feels, they are machines and cannot show empathy. Nor can they adapt to how different people prefer to talk, whether seriously or lightly, instead keeping the same tone for all conversations.

Chatbots cannot read body language, which hampers the flow of information. And if there is a short gap in a conversation, the chatbot cannot pick up the thread where it left off, instead having to start all over again. This may not be possible or agreeable for all users and may be counterproductive for patients with mental illness.

Also, if the chatbot has to answer a flood of questions, it may become confused and start to give garbled answers.

Negative impact on professional skills

The widespread use of chatbots can transform the relationship between healthcare professionals and customers, and may fail to take the process of diagnostic reasoning into account. This process is inherently uncertain, and the diagnosis may evolve over time as new findings present themselves; hence the need for prudence in making clinical decisions.

“What doctors often need is wisdom rather than intelligence, and we are a long way away from a science of artificial wisdom.” Chatbots lack both wisdom and the flexibility to correct their errors and change their decisions.

As chatbots remove diagnostic opportunities from the physician’s field of work, training in diagnosis and patient communication may deteriorate in quality. It is important to note that good physicians are made by sharing knowledge about many different subjects, through discussions with those from other disciplines and by learning to glean data from other processes and fields of knowledge.

Physicians must also be kept in the loop about the possible uncertainties of the chatbot and its diagnoses, so that they can avoid worrying about potential inaccuracies in the algorithm’s outcomes and predictions. This reduces cognitive load and thus improves physician performance.

Failure of trust

First, as patients grow to trust chatbots more, they may lose trust in healthcare professionals. Second, placing too much trust in chatbots may expose the user to data hacking. Finally, patients may feel alienated from their primary care physician or may self-diagnose once too often.

Self-diagnosis may become so routine that it keeps the patient from seeking medical care when it is truly necessary, or from believing medical professionals when it becomes clear that the self-diagnosis was inaccurate. The conversation and rapport-building then required for the medical professional to convince the patient could well outweigh the time and effort saved at the initial stages.

Business logic rules

Another frequently overlooked ethical issue is the way the technology is used, with mechanical concerns being pushed to the front over human interactions. The effects that digitalizing healthcare can have on medical practice are concerning, particularly its impact on clinical decision-making in complex situations with moral overtones.

Over-reliance on chatbots may also give healthcare companies the green light to follow market logic, making profit rather than patient benefit the primary outcome, and allowing such companies to dominate healthcare at the cost of ethical practice.

Data hacking

Training is also essential for AI to succeed, which entails collecting new information as new scenarios arise. However, this may involve passing private data, medical or financial, to the chatbot, which stores it somewhere in the digital world. Such data could be hacked and privacy breaches could occur, which is among the pressing concerns of today.

Lack of accountability

Despite the emergence of a principle-based approach to AI in medical care, it still lacks the trust-based foundation of the patient-physician relationship, the wisdom of past experience, and dependable mechanisms to ensure legal and medical accountability.

Conclusions

Despite the many difficulties in identifying the complexities of chatbot use in healthcare, efforts must be made to approach this area both ethically and professionally, rather than from the viewpoint of business. “Chatbots have the potential to be integrated into clinical practice by working alongside health practitioners to reduce costs, refine workflow efficiencies, and improve patient outcomes.”

Nonetheless, “insufficient consideration regarding the implementation of chatbots in health care can lead to poor professional practices, creating long-term side effects and harm for professionals and their patients. Whether [the benefits] outweigh the potential risks to both patients and physicians has yet to be seen.”


Last Updated: May 4, 2022

Written by

Dr. Liji Thomas

Dr. Liji Thomas is an OB-GYN who graduated from the Government Medical College, University of Calicut, Kerala, in 2001. She practiced as a full-time consultant in obstetrics and gynecology in a private hospital for a few years after graduation. She has counseled hundreds of patients facing issues ranging from pregnancy-related problems to infertility, and has been in charge of over 2,000 deliveries, always striving to achieve a normal rather than an operative delivery.
