AI model combines imaging and patient data to improve chest X-ray diagnosis


A new artificial intelligence (AI) model combines imaging information with clinical patient data to improve diagnostic performance on chest X-rays, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA).

Clinicians consider both imaging and non-imaging data when diagnosing diseases. However, most current AI-based approaches are tailored to tasks that use only one type of data at a time.

Transformer-based neural networks, a relatively new class of AI models, have the ability to combine imaging and non-imaging data for a more accurate diagnosis. These transformer models were initially developed for the computer processing of human language. They have since fueled large language models like ChatGPT and Google's AI chat service, Bard.

"Unlike convolutional neural networks, which are tuned to process imaging data, transformer models form a more general type of neural network. They rely on a so-called attention mechanism, which allows the neural network to learn about relationships in its input."

Firas Khader, M.Sc., study lead author and Ph.D. student in the Department of Diagnostic and Interventional Radiology at University Hospital Aachen in Aachen, Germany

This capability is ideal for medicine, where multiple variables like patient data and imaging findings are often integrated into the diagnosis.
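To illustrate the mechanism Khader describes, here is a minimal sketch in PyTorch of scaled dot-product attention, the operation at the core of transformer models. The tensor shapes and the self-attention setup are illustrative assumptions, not details from the study's code.

```python
# A minimal sketch of the scaled dot-product attention mechanism that
# transformer models rely on. Shapes and names are illustrative only.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, key, value):
    """Each output token is a weighted mix of all value tokens,
    with weights derived from query-key similarity."""
    d_k = query.size(-1)
    # Similarity of every token to every other token, scaled for stability.
    scores = query @ key.transpose(-2, -1) / d_k ** 0.5
    # Normalize similarities into attention weights that sum to 1.
    weights = F.softmax(scores, dim=-1)
    return weights @ value

# Ten tokens (e.g., image patches or clinical values) with 64 features each.
tokens = torch.randn(1, 10, 64)
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # torch.Size([1, 10, 64])
```

Because the attention weights are computed over whatever tokens appear in the input sequence, the same operation works whether those tokens come from text, image regions, or tabular patient data.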

Khader and colleagues developed a transformer model tailored for medical use. They trained it on imaging and non-imaging patient data from two databases containing information from a combined total of more than 82,000 patients.

The researchers trained the model to diagnose up to 25 conditions using non-imaging data, imaging data, or a combination of both, referred to as multimodal data.
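One common way such a multimodal transformer can be organized, sketched below in PyTorch, is to embed image patches and clinical parameters as tokens in a single shared sequence so that attention can relate them. The layer sizes, token counts, fusion scheme, and pooling here are assumptions for illustration; the paper's exact architecture may differ.

```python
# A hedged sketch of joint-sequence multimodal fusion in a transformer:
# image patches and clinical parameters become tokens in one sequence,
# processed together by self-attention, then classified. All hyperparameters
# below are illustrative assumptions, not the study's configuration.
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    def __init__(self, n_clinical=10, d_model=64, n_conditions=25):
        super().__init__()
        # Turn a 224x224 chest X-ray into a sequence of 16x16 patch tokens.
        self.patch_embed = nn.Conv2d(1, d_model, kernel_size=16, stride=16)
        # Project each scalar clinical parameter (age, lab values, ...)
        # into its own token so attention can relate it to image regions.
        self.clinical_embed = nn.Linear(1, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # One output per condition (multi-label diagnosis).
        self.head = nn.Linear(d_model, n_conditions)

    def forward(self, image, clinical):
        img_tokens = self.patch_embed(image).flatten(2).transpose(1, 2)
        clin_tokens = self.clinical_embed(clinical.unsqueeze(-1))
        tokens = torch.cat([img_tokens, clin_tokens], dim=1)  # joint sequence
        fused = self.encoder(tokens)
        return self.head(fused.mean(dim=1))  # pooled logits per condition

model = MultimodalClassifier()
xray = torch.randn(2, 1, 224, 224)  # batch of 2 chest X-rays
vitals = torch.randn(2, 10)         # 10 clinical parameters per patient
logits = model(xray, vitals)
print(logits.shape)  # torch.Size([2, 25])
```

In a design like this, every clinical token can attend to every image patch, so the network can in principle weigh a radiographic finding differently depending on, say, a patient's lab values.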

Compared with the models trained on a single data type, the multimodal model showed improved diagnostic performance for all conditions.

The model has potential as an aid to clinicians in a time of growing workloads.

"With patient data volumes increasing steadily over the years and time that the doctors can spend per patient being limited, it might become increasingly challenging for clinicians to interpret all available information effectively," Khader said. "Multimodal models hold the promise to assist clinicians in their diagnosis by facilitating the aggregation of the available data into an accurate diagnosis."

The proposed model could serve as a blueprint for seamlessly integrating large data volumes, Khader said.

"Multimodal Deep Learning for Integrating Chest Radiographs and Clinical Parameters - A Case for Transformers." Collaborating with Dr. Khader were Gustav Müller-Franzes, M.Sc., Tianci Wang, B.Sc., Tianyu Han, M.Sc., Soroosh Tayebi Arasteh, M.Sc., Christoph Haarburger, Ph.D., Johannes Stegmaier, Ph.D., Keno Bressem, M.D., Christiane Kuhl, M.D., Sven Nebelung, M.D., Jakob Nikolas Kather, M.D., and Daniel Truhn, M.D., Ph.D.

Journal reference:

Khader, F., et al. (2023). Multimodal Deep Learning for Integrating Chest Radiographs and Clinical Parameters - A Case for Transformers. Radiology. https://doi.org/10.1148/radiol.230806

