In a recent study published in npj Digital Medicine, researchers investigated the factors that hinder or promote healthcare professionals' acceptance of artificial intelligence (AI) in hospitals.
AI refers to the automation of intelligent human behavior, specifically the exhibition of human-like reasoning and thinking. AI technologies are increasingly used in medical practice, including complex healthcare work environments. With respect to technology, acceptance refers to the internal motivation, willingness, and intention to use a technology arising from positive attitudes toward it. The acceptance of AI systems is comparable to the acceptance of other novel tools.
About the study
A review was conducted on AI acceptance by healthcare professionals in hospital settings. The literature was examined methodically based on specific eligibility criteria. The next step involved extracting pertinent information from the studies and evaluating their quality. The review concluded by presenting study outcomes and providing recommendations for future research.
The reviewed articles' findings are presented using the Unified Theory of Acceptance and Use of Technology (UTAUT) as a framework. This theory aims to explain a user's motivation to use information technology (IT) systems. It draws on several earlier IT acceptance models, including the Technology Acceptance Model (TAM). The UTAUT model comprises four primary components, namely effort expectancy, performance expectancy, social influence, and facilitating conditions. Additionally, four moderating factors, namely sex, age, experience, and voluntariness of use, influence the four primary components.
The team searched for studies that aligned with the review's aim and research questions. The review analyzed original research papers published between 2010 and June 2022 that focused on healthcare professionals whose clinical fields of work were affected by AI. Qualitative, quantitative, and mixed-methods studies were included.
This review focused on studies published in English or German that examined factors related to AI acceptance. The eligible studies included those conducted in hospital settings and those that involved healthcare professionals in the development of AI systems.
The review analyzed a total of 42 articles. Most studies were conducted in Europe, followed by North America and Asia. Additional research was carried out in Africa and Australia. One of the eligible studies was conducted in 25 countries worldwide. The studies conducted included qualitative, quantitative, and mixed-method approaches and involved hospital-based healthcare professionals as participants. The eligible studies also utilized interviews and surveys as means of data collection.
According to three studies on clinical decision support systems (CDSS), participants reported that CDSS implementation in acute hospital settings reduced medical errors through recommendations and warnings. However, in another study on barriers to CDSS adoption, participants noted that emergency care settings experienced errors attributable to CDSS.
Healthcare professionals' estimations of the accuracy of AI-based technologies were inconsistent. In one study, 22.5% of radiology department staff believed that AI-based diagnostic tools would surpass radiologists in the near future, yet only about 12% of the participants stated that they would use AI for medical decisions in the near future. In another study, 82% of doctors, physiotherapists, and nurses found AI support systems helpful in diagnosing rare or unusual disorders.
Nearly 15% of doctors and nurses who participated in one study expressed doubt about the ability of a machine-learning system to detect early-stage delirium. Furthermore, about half of the physicians involved in a study on the use of AI in ophthalmology expressed concerns about the reliability of the system, citing difficulties in ensuring its quality.
Three studies examined healthcare professionals' attitudes toward CDSS. These studies found that professionals doubted the accuracy of diagnostic systems and CDSS because they believed the resulting information was insufficient for decision-making. However, in another study, physicians opined that CDSSs are helpful but limited in their capabilities.
Physicians believe that AI adoption in healthcare could be hindered by a lack of adaptability and transparency in machine-learning systems or CDSS used for diagnostics. Furthermore, participants in a study on the acceptance of a predictive machine-learning system emphasized the need for evidence-based, comprehensive protocols for such systems. Notably, participants who reported being unfamiliar with a system tended to reject it.
The studies identified both promoting and hindering factors affecting the acceptance of AI in hospitals, with most studies examining CDSS as a form of AI. Perceptions differed regarding the impact of AI on error incidence, time resources, and alert sensitivity. However, participants agreed unanimously on the factors hindering AI integration in clinical workflows, including difficulties with integration and fear of loss of autonomy. Notably, training on AI systems helped improve their acceptance.
To facilitate AI acceptance in healthcare, the team recommends involving end users in the early stages of AI development and providing tailored training and supporting infrastructure.