ISSN: 2322-3243 (Hard Copy), 2345-4229 (Online)
Volume 20, Issue 1 (1-2022)
Int J Radiat Res 2022, 20(1): 121-130
Deep learning-based synthetic CT generation from MR images: comparison of generative adversarial and residual neural networks
F. Gholamiankhah, S. Mostafapour, H. Arabi
Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland (hossein.arabi@unige.ch)
Abstract
Background: MRI-only radiotherapy (RT) eliminates several drawbacks of using CT images in the RT chain, such as the registration of MR images to a separate CT, extra dose delivery, and the additional cost of repeated imaging. One remaining challenge, however, is that MRI signal intensities are not related to the attenuation coefficients of biological tissue. This work compares the performance of two state-of-the-art deep learning models, a generative adversarial network (GAN) and a residual network (ResNet), for synthetic CT (sCT) generation from MR images.

Materials and Methods: Brain MR and CT images of 86 participants were analyzed. GAN and ResNet models were implemented to generate synthetic CTs from 3D T1-weighted MR images using a six-fold cross-validation scheme. Taking the CT images as the reference, the resulting sCTs were compared using standard metrics: mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM).

Results: Overall, the ResNet model delineated brain tissues more accurately. It estimated the CT values of the entire head region with an MAE of 114.1±27.5 HU, compared to MAE=-10.9±147.0 HU for the GAN model. Both models offered comparable SSIM and PSNR values, with the ResNet method performing slightly better than the GAN method.

Conclusion: We compared two state-of-the-art deep learning models for the task of MR-based sCT generation. The ResNet model produced superior results, demonstrating its potential for synthetic CT generation in PET/MR attenuation correction (AC) and MR-only RT planning.
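The evaluation metrics named in the abstract (MAE, PSNR, SSIM) can be sketched as plain NumPy functions operating on a reference CT and a synthetic CT given as HU arrays. This is a minimal illustration, not the authors' implementation: `ssim_global` is a simplified single-window SSIM (standard SSIM, e.g. in scikit-image, averages the same statistic over local windows), and the function names and `data_range` handling are assumptions.

```python
import numpy as np

def mae(ct, sct):
    """Mean absolute error (HU) between reference CT and synthetic CT."""
    return np.mean(np.abs(ct - sct))

def psnr(ct, sct, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the CT span."""
    if data_range is None:
        data_range = ct.max() - ct.min()
    mse = np.mean((ct - sct) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ct, sct, data_range=None):
    """Simplified SSIM computed over the whole image (single window)."""
    if data_range is None:
        data_range = ct.max() - ct.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ct.mean(), sct.mean()
    var_x, var_y = ct.var(), sct.var()
    cov = np.mean((ct - mu_x) * (sct - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

In a cross-validation setup like the paper's, these would be computed per held-out subject and then averaged to yield the mean±SD figures reported above.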
 
Full-Text [PDF 1969 kb]
Type of Study: Original Research | Subject: Radiobiology
Gholamiankhah F, Mostafapour S, Arabi H. Deep learning-based synthetic CT generation from MR images: comparison of generative adversarial and residual neural networks. Int J Radiat Res 2022; 20 (1) :121-130
URL: http://ijrr.com/article-1-4082-en.html


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
International Journal of Radiation Research