High-resolution magnetic resonance imaging (MRI) provides detailed anatomical information critical for clinical diagnosis. However, high-resolution MRI typically comes at the cost of long scan times, small spatial coverage, and low signal-to-noise ratio. Convolutional neural networks (CNNs) have been successfully applied to the super-resolution task of recovering high-resolution generic images from low-resolution inputs. Additionally, recent studies have shown the potential of the generative adversarial network (GAN) to produce high-quality super-resolution MRIs using learned image priors. However, existing approaches require paired MRI images as training data, which are difficult to obtain with existing datasets because the alignment between high- and low-resolution images has to be performed manually. This paper implements two different GAN-based models to handle super-resolution: the Enhanced Super-Resolution GAN (ESRGAN) and CycleGAN. Unlike the generic model, the CycleGAN architecture is modified to solve super-resolution on unpaired MRI data, and ESRGAN is implemented as a reference to compare the performance of GAN-based methods. The GAN-based models generate high-resolution images with rich textures compared to the ground truth. Experiments are performed on both 3T and 7T MRI images to recover different resolution scales.
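To illustrate the unpaired training idea summarized above, the following is a minimal sketch of a CycleGAN-style cycle-consistency objective adapted to super-resolution. It is not the authors' implementation: the toy generators, layer sizes, the bicubic upsampling step, and the scale factor of 2 are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of cycle-consistency
# for unpaired MRI super-resolution with two generators.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallGenerator(nn.Module):
    """Toy residual CNN standing in for a full super-resolution generator."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual refinement at a fixed grid size

G_lr2hr = SmallGenerator()  # maps upsampled LR slices toward the HR domain
G_hr2lr = SmallGenerator()  # maps HR slices back toward the LR domain

lr = torch.rand(4, 1, 64, 64)    # unpaired low-resolution MRI slices
hr = torch.rand(4, 1, 128, 128)  # unpaired high-resolution MRI slices

# Work at HR grid size: bicubic upsampling brings the LR branch to 128x128.
lr_up = F.interpolate(lr, scale_factor=2, mode="bicubic", align_corners=False)

fake_hr = G_lr2hr(lr_up)   # LR -> HR
rec_lr = G_hr2lr(fake_hr)  # HR -> LR (cycle back)
fake_lr = G_hr2lr(hr)      # HR -> LR
rec_hr = G_lr2hr(fake_lr)  # LR -> HR (cycle back)

# Cycle-consistency terms; adversarial losses from two discriminators on
# fake_hr and fake_lr would be added in a full training loop.
cycle_loss = F.l1_loss(rec_lr, lr_up) + F.l1_loss(rec_hr, hr)
print(float(cycle_loss))
```

Because the two domains are never paired, only the cycle terms and the adversarial terms constrain the mapping, which is what allows training without manually aligned high- and low-resolution scans.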
Huy Do, Pascal Bourdon, David Helbert, Mathieu Naudin, Remy Guillevin, "7T MRI super-resolution with Generative Adversarial Network," in Proc. IS&T Int'l. Symp. on Electronic Imaging: 3D Imaging and Applications, 2021, pp. 106-1 - 106-7, https://doi.org/10.2352/ISSN.2470-1173.2021.18.3DIA-106