Dual-perspective AI model achieves high accuracy in early lung cancer diagnosis

Lung cancer remains the leading cause of cancer-related deaths worldwide, accounting for nearly one in five cancer deaths - around 1.8 million lives lost each year. One of the main reasons is late diagnosis: in its early stages, the disease appears as extremely small nodules that are difficult to distinguish from healthy tissue, even for experienced radiologists.

For doctors, this means constantly balancing between what is visible and what might be missed. Even subtle differences in a scan can determine whether cancer is detected early or overlooked entirely.
Researchers are now exploring how artificial intelligence (AI) could help solve this challenge by giving doctors a more reliable way to analyse complex medical images.

Seeing both detail and context at the same time

To improve lung cancer detection, researchers developed a system that learns to analyse computed tomography (CT) scans in a way that closely resembles how doctors work - but without the need to switch between perspectives.

"One part of the model focuses on small details, such as tiny spots or textures in the lungs, while another looks at the overall image and understands the bigger context," says Inzamam Mashood Nasir, a Kaunas University of Technology (KTU) researcher and one of the system's developers.

This dual approach addresses a key limitation in existing systems, which often capture only part of the information - either fine details or the overall structure, but not both at the same time.

In practice, a radiologist constantly shifts between these two views - zooming in on suspicious areas and then stepping back to understand how they relate to the entire lung. The AI system, however, performs both tasks simultaneously.

"You can think of it as having a magnifying glass and a full view of the scan at the same time," Nasir explains.

The model was trained using CT scans from both healthy individuals and cancer patients, learning to recognise patterns that distinguish between normal, benign, and malignant cases.
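The dual-branch idea described above can be sketched in code. The following is a minimal illustrative sketch in PyTorch, not the authors' published architecture: every layer size, patch dimension, and class name here is an assumption, chosen only to show how a fine-detail CNN branch and a global-context transformer branch can be fused into a single classifier over normal, benign, and malignant classes.

```python
import torch
import torch.nn as nn

class DualBranchClassifier(nn.Module):
    """Hypothetical sketch of a dual-branch (local CNN + global transformer) model."""

    def __init__(self, num_classes=3):
        super().__init__()
        # Local branch: a small CNN that captures fine textures (nodule-scale detail).
        self.local = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),          # -> (B, 32, 4, 4)
        )
        # Global branch: a transformer encoder over coarse 16x16 patches
        # (whole-lung context rather than individual textures).
        self.patchify = nn.Conv2d(1, 32, kernel_size=16, stride=16)
        encoder_layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
        self.global_enc = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Fusion head: concatenate both views and classify.
        self.head = nn.Linear(32 * 4 * 4 + 32, num_classes)

    def forward(self, x):
        # x: (B, 1, 64, 64) grayscale CT slice (toy resolution for illustration)
        local_feat = self.local(x).flatten(1)               # (B, 512)
        tokens = self.patchify(x).flatten(2).transpose(1, 2)  # (B, 16, 32)
        global_feat = self.global_enc(tokens).mean(dim=1)   # (B, 32)
        return self.head(torch.cat([local_feat, global_feat], dim=1))

model = DualBranchClassifier()
logits = model(torch.randn(2, 1, 64, 64))  # batch of 2 toy slices
print(logits.shape)  # torch.Size([2, 3]) - one score per class
```

A single forward pass yields one logit per class for each slice; in practice a model like this would be trained with a standard cross-entropy loss on labelled CT data, which is the kind of supervised setup the article describes.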

The results show a clear performance improvement. The system achieved an accuracy of over 96 per cent, outperforming existing approaches and maintaining stable performance across different tests. "This level of advancement is important, especially in medical applications where even small differences can have serious consequences," notes the KTU PhD student.

Applicable beyond lung cancer - including brain tumours and breast cancer

In clinical practice, this system could change how lung cancer is diagnosed.

"This is about supporting clinicians. The system provides a second opinion, helps ensure that important details are not overlooked, and reduces the time needed per patient, particularly in high-workload environments," emphasises the KTU researcher.

For patients, the impact is even more significant. Lung cancer is often diagnosed late, when treatment options are limited. Earlier detection can dramatically increase survival rates. "Early diagnosis means treatment can start sooner, and outcomes are generally much better," says Nasir.

The system is designed to improve both sides of the problem - reducing missed cases while also lowering the number of false alarms that can lead to unnecessary stress and procedures.

However, researchers note that the current model was trained on a relatively limited dataset and still needs to be tested on larger, more diverse patient groups. "In real-world conditions, there are many variables - different scanners, imaging protocols, and patient populations, so we need to ensure the system performs reliably across all of them," explains Nasir.

Future steps include clinical validation, testing in hospital environments, and integration into existing medical systems.

Looking ahead, the same approach could be applied beyond lung cancer. "Any medical imaging task that requires both detailed analysis and understanding of the bigger picture could benefit from this type of model," says Nasir, pointing to areas such as brain tumours, breast cancer, and eye diseases.

Journal reference:

Yousafzai, S. N., et al. (2026). A hybrid deep learning approach integrating CNN and transformer for lung cancer classification using CT scans. Scientific Reports. DOI: 10.1038/s41598-026-41161-7. https://www.nature.com/articles/s41598-026-41161-7
