Generative adversarial networks improve interior computed tomography angiography reconstruction




Ketola, J. H. J., Heino, H., Juntunen, M. A. K., Nieminen, M. T., Siltanen, S. & Inkinen, S. I. 2021, 'Generative adversarial networks improve interior computed tomography angiography reconstruction', Biomedical Physics & Engineering Express, vol. 7, no. 6, 065041.

Title: Generative adversarial networks improve interior computed tomography angiography reconstruction
Author: Ketola, Juuso H. J.; Heino, Helinä; Juntunen, Mikael A. K.; Nieminen, Miika T.; Siltanen, Samuli; Inkinen, Satu I.
Contributor organization: Department of Mathematics and Statistics
The Academic Outreach Network
Inverse Problems
Mikko Samuli Siltanen / Principal Investigator
Date: 2021-11
Language: eng
Number of pages: 13
Belongs to series: Biomedical Physics & Engineering Express
ISSN: 2057-1976
Abstract: In interior computed tomography (CT), the x-ray beam is collimated to a limited field-of-view (FOV) (e.g. the volume of the heart) to decrease exposure to adjacent organs, but the resulting image has a severe truncation artifact when reconstructed with traditional filtered back-projection (FBP) type algorithms. In some examinations, such as cardiac or dentomaxillofacial imaging, interior CT could be used to achieve further dose reductions. In this work, we describe a deep learning (DL) method to obtain artifact-free images from interior CT angiography. Our method employs the Pix2Pix generative adversarial network (GAN) in a two-stage process: (1) an extended sinogram is computed from the truncated sinogram with one GAN model, and (2) the FBP reconstruction obtained from that extended sinogram is used as the input to another GAN model that improves the quality of the interior reconstruction. Our double GAN (DGAN) model was trained with 10 000 truncated sinograms simulated from real computed tomography angiography slice images; truncated sinograms (input) were paired with the original slice images (target) in training to yield improved reconstructions (output). DGAN performance was compared with the adaptive de-truncation method, total variation regularization, and two reference DL methods: FBPConvNet and U-Net-based sinogram extension (ES-UNet). Our DGAN method and ES-UNet yielded the best root-mean-squared error (RMSE) (0.03 ± 0.01) and structural similarity index (SSIM) (0.92 ± 0.02) values, and the reference DL methods also performed well. Furthermore, we performed an extended-FOV analysis by increasing the reconstruction area by 10% and 20%. In both cases, the DGAN approach yielded the best results in RMSE (0.03 ± 0.01 and 0.04 ± 0.01 for the 10% and 20% cases, respectively), peak signal-to-noise ratio (PSNR) (30.5 ± 2.6 dB and 28.6 ± 2.6 dB), and SSIM (0.90 ± 0.02 and 0.87 ± 0.02).
In conclusion, our method not only reconstructed the interior region with improved image quality but also extended the reconstructed FOV by 20%.
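The interior-CT setup the abstract describes, collimating the beam so that only the central detector channels are measured, can be illustrated with a minimal NumPy sketch. The `radon` and `truncate_sinogram` helpers below are hypothetical stand-ins written for this illustration (not the authors' code); the two GAN stages of the DGAN pipeline are only indicated in comments, since the trained Pix2Pix models are not reproduced here.

```python
import numpy as np

def radon(image, angles):
    """Simple parallel-beam forward projection (nearest-bin Radon transform)."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sino = np.zeros((len(angles), n))
    for i, theta in enumerate(angles):
        # Detector coordinate of each pixel at this projection angle.
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        bins = np.clip(np.round(t).astype(int), 0, n - 1)
        sino[i] = np.bincount(bins.ravel(), weights=image.ravel(), minlength=n)
    return sino

def truncate_sinogram(sino, fov_fraction=0.5):
    """Collimate to an interior FOV: keep only the central detector channels."""
    n = sino.shape[1]
    keep = int(n * fov_fraction)
    start = (n - keep) // 2
    out = np.zeros_like(sino)
    out[:, start:start + keep] = sino[:, start:start + keep]
    return out

# Disk phantom standing in for a CT angiography slice.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
phantom = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (0.4 * n) ** 2).astype(float)

angles = np.linspace(0, np.pi, 90, endpoint=False)
full_sino = radon(phantom, angles)
trunc_sino = truncate_sinogram(full_sino, fov_fraction=0.5)

# Stage 1 of the DGAN pipeline would map trunc_sino to an extended sinogram;
# stage 2 would take the FBP reconstruction of that extension and refine it.
```

Reconstructing `trunc_sino` directly with FBP produces the truncation artifact the paper targets, because the projections outside the central channels are missing; the two-stage GAN pipeline is trained to fill in exactly that missing information.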
Subject: 3111 Biomedicine
111 Mathematics
computed tomography
convolutional neural networks
generative adversarial networks
image reconstruction
interior tomography
sinogram extension
Peer reviewed: Yes
Rights: cc_by
Usage restriction: openAccess
Self-archived version: publishedVersion

Files in this item


Ketola_2021_Bio ... _Eng._Express_7_065041.pdf (2.023 MB, PDF)
