In recent years, several Image Quality Metrics (IQMs) have been introduced that compare feature maps extracted from different pre-trained deep learning models [1-3]. While such objective IQMs have shown a high correlation with subjective scores, little attention has been paid to how they could be used to better understand the Human Visual System (HVS) and how observers evaluate the quality of images. In this study, using different pre-trained Convolutional Neural Network (CNN) models, we identify the features most relevant to Image Quality Assessment (IQA). By visualizing these feature maps, we gain a better understanding of which features play a dominant role when evaluating image quality. Experimental results on four benchmark datasets show that the most important feature maps represent repeated textures such as stripes or checkers, and that feature maps linked to the colors blue and orange also play a crucial role. Additionally, when calculating the quality of an image from a comparison of feature maps, higher accuracy is reached when only the most relevant feature maps are used rather than all the feature maps extracted from a CNN model.
[1] Amirshahi, Seyed Ali, Marius Pedersen, and Stella X. Yu. "Image quality assessment by comparing CNN features between images." Journal of Imaging Science and Technology 60.6 (2016): 60410-1.
[2] Amirshahi, Seyed Ali, Marius Pedersen, and Azeddine Beghdadi. "Reviving traditional image quality metrics using CNNs." Color and Imaging Conference, Vol. 2018, No. 1. Society for Imaging Science and Technology, 2018.
[3] Gao, Fei, et al. "DeepSim: Deep similarity for image quality assessment." Neurocomputing 257 (2017): 104-114.
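The core comparison step can be illustrated with a short sketch. The following is a minimal example, not the study's exact method: it extracts feature maps from a truncated pre-trained VGG16, scores per-channel similarity between a reference and a distorted image, and contrasts pooling over all channels with pooling over a subset of "relevant" channels. The channel indices and file paths are hypothetical placeholders, not values from the study.

```python
# Minimal sketch (assumed setup, not the authors' exact method): compare
# per-channel CNN feature maps between a reference and a distorted image,
# and score quality from all channels vs. a hand-picked subset.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained VGG16 truncated after an early convolutional block.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:9].eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def feature_maps(path: str) -> torch.Tensor:
    """Extract feature maps of shape (channels, H, W) for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return vgg(img).squeeze(0)

def channel_similarity(ref: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between corresponding channels, one value per channel."""
    return torch.nn.functional.cosine_similarity(
        ref.flatten(1), dist.flatten(1), dim=1)  # (channels,)

# Hypothetical indices standing in for the "most relevant" feature maps.
RELEVANT_CHANNELS = [3, 17, 42, 101]

# Placeholder image paths.
sims = channel_similarity(feature_maps("reference.png"),
                          feature_maps("distorted.png"))

quality_all = sims.mean().item()                        # all feature maps
quality_subset = sims[RELEVANT_CHANNELS].mean().item()  # relevant subset only
print(f"all channels: {quality_all:.4f}, subset: {quality_subset:.4f}")
```

In practice, the relevant channels would be selected by measuring how strongly each channel's similarity correlates with subjective scores across a dataset, which is the kind of analysis the abstract describes.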
This paper investigates camera phone image quality, namely the effect of sensor megapixel (MP) resolution on the perceived quality of images displayed at full size on high-quality desktop displays. For this purpose, we use images from simulated cameras with different sensor MP resolutions. We employ methods recommended in the IEEE 1858 Camera Phone Image Quality (CPIQ) standard, as well as other established psychophysical paradigms, to obtain subjective image quality ratings from large numbers of observers for systems with varying MP resolution. These ratings are subsequently used to validate image quality metrics (IQMs) relating to sharpness and resolution, including those from the CPIQ standard. Further, we define acceptable levels of quality for mobile phone images, when changing MP resolution, in Subjective Quality Scale (SQS) units. Finally, we map SQS levels to the categories obtained from star-rating experiments (commonly used to rate consumer experience). Our findings establish a relationship between the MP resolution of the camera sensor and that of the LCD display device. The chosen metrics predict quality accurately, but only the metrics proposed by CPIQ return results in calibrated JNDs in quality. We close by discussing the appropriateness of star-rating experiments for measuring subjective image quality and for metric validation.
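The two analysis steps described above can be sketched in a few lines: validating an objective IQM against mean subjective SQS ratings via correlation, and mapping SQS values (in JND units) to star-rating categories. All numbers below, including the category thresholds, are hypothetical and not taken from the paper.

```python
# Minimal sketch under assumed data: (1) correlate an objective metric with
# mean subjective SQS ratings, (2) map SQS values to 1-5 star categories.
import numpy as np
from scipy import stats

# Hypothetical per-image data: objective metric predictions and mean SQS scores.
metric_scores = np.array([0.62, 0.71, 0.55, 0.80, 0.90, 0.47])
sqs_scores = np.array([12.1, 14.0, 10.5, 16.2, 18.9, 8.7])  # in JND units

# Metric validation: linear and rank-order correlation with subjective data.
pearson_r, _ = stats.pearsonr(metric_scores, sqs_scores)
spearman_rho, _ = stats.spearmanr(metric_scores, sqs_scores)
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")

# SQS-to-star mapping with illustrative category boundaries (not the paper's).
star_thresholds = [9.0, 12.0, 15.0, 18.0]  # boundaries between the 5 categories

def sqs_to_stars(sqs: float) -> int:
    """Map an SQS value (JND units) to a 1-5 star category."""
    return 1 + int(np.searchsorted(star_thresholds, sqs))

for s in sqs_scores:
    print(f"SQS {s:5.1f} -> {sqs_to_stars(s)} stars")
```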