Deep learning models can improve low-field MRI images for better clinical diagnoses


In a recent study published in Scientific Reports, a team of scientists from Australia investigated how well their image-to-image translation model, LoHiResGAN, maintained diagnostic integrity and retained important medical information while enhancing low-field magnetic resonance imaging (MRI) scans into synthetic high-field, 3 Tesla (3T) images, compared with other models.

Study: Improving portable low-field MRI image quality through image-to-image translation using paired low- and high-field images. Image Credit: Gorodenkoff/Shutterstock.com

Background

Magnetic resonance imaging has widespread and essential applications in medical science due to its non-invasive nature and the ability to visualize soft tissues and organs in high contrast.

The method combines radiofrequency pulses, strong magnetic fields, and information from computer algorithms to produce images of various body regions, such as the brain, joints, organs, and the vertebral column.

Additionally, since MRI does not use ionizing radiation, the risk of radiation-related complications is also low.

Compared to the high-field MRI scanners used in the clinical setting, low-field or 64 milliTesla (64mT) MRI is economical, compact, and portable.

Furthermore, despite the low signal-to-noise ratio, low-field MRI has many applications, such as neuroimaging or visualization of the musculoskeletal system, especially in emergency settings in economically challenged or remote regions.

Recent research has focused on developing deep-learning-based models to translate low-field 64mT MRI scans to high-field 3T images.

About the study

In the present study, the researchers used a paired dataset of 64mT and 3T scans from T1-weighted MRI, which enhances signals from fatty tissue, and T2-weighted MRI, in which the signal from water is enhanced, to compare the performance of LoHiResGAN against other image-to-image translation models such as CycleGAN, GAN, cGAN, and U-Net.

The study enrolled 92 healthy participants scanned using 3T and 64mT MRI systems. Brain scans were obtained, and morphometric measurements of 33 brain regions were compared across images from 64mT, 3T, and synthetic 3T scans obtained from the models.

Various factors were considered while selecting the imaging sequences. For the 3T MRI sequences, the researchers selected a two-dimensional T2-weighted turbo spin echo (TSE) sequence that enabled efficient scanning while reducing patient discomfort.

Furthermore, given the widespread use of this method in the clinical setting, the researchers also ensured that their results had immediate relevance in the clinical field.

A linear image registration tool was then used to co-register the 3T and 64mT scans to prepare the training data for the deep-learning model. The final dataset was randomly split into three groups — training, validation, and testing — to ensure that separate data were used for each stage.
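The random split described above can be sketched as follows. The article does not state the split proportions, so a common 80/10/10 division is assumed here purely for illustration:

```python
import numpy as np

# Hypothetical 80/10/10 train/validation/test split of the 92 paired scans.
# The actual proportions used in the study are not given in the article.
rng = np.random.default_rng(seed=0)        # fixed seed for reproducibility
indices = rng.permutation(92)              # shuffle the 92 participant indices
train, val, test = np.split(indices, [int(0.8 * 92), int(0.9 * 92)])
```

Splitting by participant (rather than by individual image) prevents scans from the same person appearing in both training and testing sets, which would inflate performance estimates.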

The model proposed in the present study, LoHiResGAN, uses a Residual Neural Network (ResNet) component within a Generative Adversarial Network (GAN) architecture.
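The key idea behind a ResNet component is the skip connection: each block learns a residual correction that is added back to its input, which makes deep generators easier to train. The sketch below illustrates only this generic idea with a toy fully-connected block, not the authors' actual architecture:

```python
import numpy as np

def residual_block(x, weight):
    # y = x + f(x): the skip connection adds the learned residual f(x)
    # back onto the input, so the block defaults to an identity mapping.
    # Here f is a toy linear layer with a ReLU nonlinearity.
    return x + np.maximum(0.0, x @ weight)
```

When the learned residual is zero, the block passes its input through unchanged, so stacking many such blocks does not degrade the signal the way plain deep stacks can.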

To evaluate whether the ResNet components were effective in the LoHiResGAN model, its performance was compared against that of models without ResNet components.

Quantitative metrics, including the structural similarity index measure (SSIM), normalized root-mean-squared error (NRMSE), perception-based image quality evaluator (PIQE), and peak signal-to-noise ratio (PSNR), were used to compare the performance of the various image-to-image translation models.
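Two of these metrics are simple to state. PSNR measures, in decibels, how far a synthetic image deviates from the reference, and NRMSE normalizes the pixel-wise error by the reference's intensity range. A minimal sketch (the study's exact implementation details are not given in the article):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nrmse(ref, img):
    # Root-mean-squared error normalized by the reference intensity range;
    # lower means closer to the reference.
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    return rmse / (ref.max() - ref.min())
```

SSIM and PIQE are more involved (they model local structure and perceptual quality rather than raw pixel error) and are typically taken from a library such as scikit-image rather than reimplemented.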

Results

The results showed that the synthetic 3T images obtained using LoHiResGAN were of significantly better image quality than those obtained using other models such as CycleGAN, GAN, cGAN, and U-Net.

Furthermore, the brain morphometry measurements obtained using LoHiResGAN were more consistent across various brain regions with reference to the 3T MRI scans than the other models.

While all the image-to-image translation models achieved better signal-to-noise ratios and structural similarity index measures than the low-field 64mT scans, the GAN-based models, including LoHiResGAN, outperformed the U-Net model on the quantitative metrics. These findings highlight the potential of GAN-based models for improving low-quality MRI scans.

While low-field MRI scans have several logistical advantages, 64mT scans can also present discrepancies that affect clinical diagnosis. For example, diagnosing conditions such as hydrocephalus depends critically on precise estimation of brain morphometric measurements.

The researchers also discussed some of the shortcomings of deep-learning-based models, such as inconsistencies in accurately labeling the white and grey matter in the brain, indicating potential areas for improvement.

Conclusions

Overall, the findings suggested that the image-to-image translation model LoHiResGAN significantly improved the image quality of low-field 64mT MRI sequences while being consistent in morphometric measurements across various brain regions.

The study highlights the potential use of these models in improving the scope of clinical diagnoses in areas without high-field MRI scans.  


Written by

Dr. Chinta Sidharthan

Chinta Sidharthan is a writer based in Bangalore, India. Her academic background is in evolutionary biology and genetics, and she has extensive experience in scientific research, teaching, science writing, and herpetology. Chinta holds a Ph.D. in evolutionary biology from the Indian Institute of Science and is passionate about science education, writing, animals, wildlife, and conservation. For her doctoral research, she explored the origins and diversification of blindsnakes in India, as a part of which she did extensive fieldwork in the jungles of southern India. She has received the Canadian Governor General’s bronze medal and Bangalore University gold medal for academic excellence and published her research in high-impact journals.


