Ketola, Juuso H. J.; Heino, Helinä; Juntunen, Mikael A. K.; Nieminen, Miika T.; Siltanen, Samuli; Inkinen, Satu I.
(2021)
In interior computed tomography (CT), the X-ray beam is collimated to a limited field-of-view (FOV), e.g. the volume of the heart, to decrease exposure to adjacent organs, but the resulting image has a severe truncation artifact when reconstructed with traditional filtered back-projection (FBP) type algorithms. In some examinations, such as cardiac or dentomaxillofacial imaging, interior CT could be used to achieve further dose reductions. In this work, we describe a deep learning (DL) method to obtain artifact-free images from interior CT angiography. Our method employs the Pix2Pix generative adversarial network (GAN) in a two-stage process: (1) an extended sinogram is computed from a truncated sinogram with one GAN model, and (2) the FBP reconstruction obtained from that extended sinogram is used as input to another GAN model that improves the quality of the interior reconstruction. Our double GAN (DGAN) model was trained with 10 000 truncated sinograms simulated from real computed tomography angiography slice images. Truncated sinograms (input) were paired with the original slice images (target) in training to yield an improved reconstruction (output). DGAN performance was compared with the adaptive de-truncation method, total variation regularization, and two reference DL methods: FBPConvNet and U-Net-based sinogram extension (ES-UNet). Our DGAN method and ES-UNet yielded the best root-mean-squared error (RMSE; 0.03 ± 0.01) and structural similarity index (SSIM; 0.92 ± 0.02) values, and the reference DL methods also yielded good results. Furthermore, we performed an extended FOV analysis by increasing the reconstruction area by 10% and 20%. In both cases, the DGAN approach yielded the best results in RMSE (0.03 ± 0.01 and 0.04 ± 0.01 for the 10% and 20% cases, respectively), peak signal-to-noise ratio (PSNR; 30.5 ± 2.6 dB and 28.6 ± 2.6 dB), and SSIM (0.90 ± 0.02 and 0.87 ± 0.02).
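The two-stage process described above can be sketched schematically; the GAN models and the FBP operator below are hypothetical stand-in callables, not the trained networks from this work:

```python
import numpy as np

def reconstruct_interior(truncated_sinogram, sino_gan, fbp, image_gan):
    """Two-stage interior reconstruction: sinogram extension, FBP, image refinement."""
    extended = sino_gan(truncated_sinogram)  # stage 1: GAN-based sinogram extension
    interim = fbp(extended)                  # classical filtered back-projection
    return image_gan(interim)                # stage 2: GAN-based image refinement

# toy stand-ins: identity "GANs" and a transposing "FBP", only to exercise the data flow
sino = np.ones((180, 128))                   # 180 projection angles, 128 detector bins
recon = reconstruct_interior(sino, lambda s: s, lambda s: s.T, lambda x: x)
print(recon.shape)  # (128, 180)
```

In practice each callable would wrap a trained Pix2Pix generator or an FBP implementation; the sketch only shows how the two models are chained around the classical reconstruction step.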
In conclusion, our method was able not only to reconstruct the interior region with improved image quality, but also to extend the reconstructed FOV by 20%.
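The RMSE and PSNR figures of merit reported above are standard image-quality metrics; a minimal NumPy sketch (SSIM is omitted here, as it involves local windowed statistics):

```python
import numpy as np

def rmse(ref, img):
    # root-mean-squared error between reference image and reconstruction
    return np.sqrt(np.mean((ref - img) ** 2))

def psnr(ref, img, data_range=1.0):
    # peak signal-to-noise ratio in dB, for intensities scaled to [0, data_range]
    return 20.0 * np.log10(data_range / rmse(ref, img))

# toy example: a constant error of 0.1 gives RMSE 0.1 and PSNR 20 dB
ref = np.zeros((64, 64))
img = ref + 0.1
print(round(float(rmse(ref, img)), 3))  # 0.1
print(round(float(psnr(ref, img)), 1))  # 20.0
```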