Finding an objective image quality metric that matches subjective quality has always been a challenging task. We propose a new full-reference image quality metric based on features extracted from Convolutional Neural Networks (CNNs). Using a pre-trained AlexNet model, we extract feature maps of the test and reference images at multiple layers and compute their feature similarity at each layer. These similarity scores are then pooled across layers to obtain an overall quality value. Experimental results on four state-of-the-art databases show that our metric is on par with or outperforms 10 other state-of-the-art metrics, demonstrating that CNN features at multiple levels are superior to the handcrafted features used in most image quality metrics in capturing aspects that matter for discriminative perception. © 2016 Society for Imaging Science and Technology.
Seyed Ali Amirshahi, Marius Pedersen, Stella X. Yu, "Image Quality Assessment by Comparing CNN Features between Images," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Image Quality and System Performance XIV, 2017, pp. 42-51, https://doi.org/10.2352/ISSN.2470-1173.2017.12.IQSP-225
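The pipeline described in the abstract (multi-layer feature extraction, per-layer similarity, pooling into a single score) can be sketched roughly as below, assuming PyTorch/torchvision for the pre-trained AlexNet. The selected layer indices, the cosine-based similarity, and the arithmetic-mean pooling are illustrative assumptions, not the exact measures used in the paper.

```python
# Minimal sketch: full-reference quality from multi-layer AlexNet features.
# Layer choice, cosine similarity, and mean pooling are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pre-trained AlexNet; feature maps are tapped after each convolutional layer.
alexnet = models.alexnet(pretrained=True).eval()
CONV_IDS = [0, 3, 6, 8, 10]  # conv layer indices in alexnet.features (assumed tap points)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def feature_maps(image):
    """Collect feature maps of one image at the chosen layers."""
    x = preprocess(image).unsqueeze(0)
    maps = []
    with torch.no_grad():
        for i, layer in enumerate(alexnet.features):
            x = layer(x)
            if i in CONV_IDS:
                maps.append(x)
    return maps

def layer_similarity(f_test, f_ref):
    """Similarity of two same-layer feature maps (cosine over flattened maps)."""
    return F.cosine_similarity(f_test.flatten(1), f_ref.flatten(1)).item()

def cnn_quality_score(test_img, ref_img):
    """Pool per-layer similarities (here: averaged) into one quality value."""
    sims = [layer_similarity(t, r)
            for t, r in zip(feature_maps(test_img), feature_maps(ref_img))]
    return sum(sims) / len(sims)

# Usage:
# score = cnn_quality_score(Image.open("test.png").convert("RGB"),
#                           Image.open("ref.png").convert("RGB"))
```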