More AI explanations can reduce accuracy in cancer diagnosis

In recent years, AI has emerged as a powerful tool for analyzing medical images. Thanks to advances in computing and the large medical datasets from which AI can learn, it has proven to be a valuable aid in reading and analyzing patterns in X-rays, MRIs and CT scans, enabling doctors to make faster, better-informed decisions, particularly in the diagnosis and treatment of life-threatening diseases like cancer. In certain settings, these AI tools even offer advantages over their human counterparts.

"AI systems can process thousands of images quickly and provide predictions much faster than human reviewers," says Onur Asan, Associate Professor at Stevens Institute of Technology, whose research focuses on human-computer interaction in healthcare. "Unlike humans, AI does not get tired or lose focus over time." 

Yet many clinicians view AI with at least some degree of distrust, largely because they don't know how it arrives at its predictions, an issue known as the "black box" problem. "When clinicians don't know how AI generates its predictions, they are less likely to trust it," says Asan. "So we wanted to find out whether providing extra explanations may help clinicians, and how different degrees of AI explainability influence diagnostic accuracy, as well as trust in the system." 

Working together with his PhD student Olya Rezaeian and Assistant Professor Alparslan Emrah Bayrak at Lehigh University, Asan conducted a study of 28 oncologists and radiologists who used AI to analyze breast cancer images. The clinicians were also provided with varying levels of explanation for the AI tool's assessments. At the end, participants answered a series of questions designed to gauge their confidence in the AI-generated assessments and how difficult they found the task.

The team found that AI did improve clinicians' diagnostic accuracy compared with a control group, but with some interesting caveats.

The study revealed that providing more in-depth explanations didn't necessarily produce more trust. "We found that more explainability doesn't equal more trust," says Asan. That's because extra or more complex explanations require clinicians to process additional information, taking time and focus away from analyzing the images themselves. When explanations were more elaborate, clinicians took longer to make decisions, which decreased their overall performance.

"Processing more information adds more cognitive workload to clinicians. It also makes them more likely to make mistakes and possibly harm the patient," Asan explains. "You don't want to add cognitive load to the users by adding more tasks."

Asan's research also found that in some cases clinicians trusted the AI too much, which could cause them to overlook crucial information in the images and lead to patient harm. "If an AI system is not designed well and makes some errors while users have high confidence in it, some clinicians may develop a blind trust, believing that whatever the AI is suggesting is true, and not scrutinize the results enough," says Asan.

The team outlined their findings in two recent studies: "The impact of AI explanations on clinicians' trust and diagnostic accuracy in breast cancer," published in Applied Ergonomics on November 1, 2025, and "Explainability and AI Confidence in Clinical Decision Support Systems: Effects on Trust, Diagnostic Performance, and Cognitive Load in Breast Cancer Care," published in the International Journal of Human–Computer Interaction on August 7, 2025.

Asan believes that AI will continue to be a helpful assistant to clinicians in interpreting medical imaging, but such systems must be built thoughtfully. "Our findings suggest that designers should exercise caution when building explanations into the AI systems," he says, so that they don't become too cumbersome to use. He adds that users will also need proper training, as human oversight will still be necessary. "Clinicians who use AI should receive training that emphasizes interpreting the AI outputs and not just trusting it."

Ultimately, there should be a good balance between the ease of use and the utility of AI systems, Asan notes. "Research finds that there are two main parameters for a person to use any form of technology: perceived usefulness and perceived ease of use," he says. "So if doctors think that this tool is useful for doing their job, and it's easy to use, they are going to use it."
