Traditional quality estimators evaluate an image's resemblance to a reference image. However, quality estimators are not well suited to the similar but distinct task of utility estimation, in which an image is judged by how useful it would be for extracting information about image content. While full-reference image utility metrics have been developed that outperform quality estimators on the utility-prediction task, assuming the existence of a high-quality reference image is not always realistic. The Oxford Visual Geometry Group's (VGG) deep convolutional neural network (CNN) [1], designed for object recognition, is modified and adapted to the task of utility estimation. This network achieves no-reference utility estimation performance near the full-reference state of the art, with a Pearson correlation of 0.946 with the subjective utility scores of the CU-Nantes database and a root-mean-square error of 12.3. Other no-reference techniques adapted from the quality domain yield inferior performance. The CNN also generalizes better to distortion types outside of the training set and is easily updated to include new types of distortion. Early stages of the network apply transformations similar to those of previously developed full-reference utility estimation algorithms.
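The abstract describes repurposing a VGG-style object-recognition network as a single-output regressor for utility scores. The sketch below illustrates one way such an adaptation could look, assuming PyTorch and torchvision; it is not the authors' exact architecture, and the pooled feature size, head dimensions, and dropout rate are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact model) of adapting a VGG backbone
# for no-reference utility regression: keep the convolutional features and
# replace the 1000-way classifier with a single utility-score output.
import torch
import torch.nn as nn
from torchvision import models

class UtilityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features              # pretrained convolutional backbone
        self.pool = nn.AdaptiveAvgPool2d((7, 7))  # fixed-size feature map
        self.head = nn.Sequential(                # regression head (illustrative sizes)
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, 1),
        )

    def forward(self, x):                         # x: (N, 3, H, W)
        return self.head(self.pool(self.features(x))).squeeze(1)

if __name__ == "__main__":
    model = UtilityRegressor().eval()
    with torch.no_grad():
        score = model(torch.rand(1, 3, 224, 224))  # predicted utility score
    print(score.shape)  # torch.Size([1])
```

In practice the head would be trained (or fine-tuned end to end) against subjective utility scores such as those of the CU-Nantes database, e.g. with an MSE loss between predicted and subjective scores.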
Image quality assessment (IQA) has long been an important issue in image processing. Although subjective quality assessment is the most suitable way to evaluate image processing algorithms, collecting subjective scores is costly in both time and money. Many objective quality assessment algorithms are therefore widely used as a substitute. Objective quality assessment is divided into three types according to the availability of a reference image: full-reference, reduced-reference, and no-reference IQA. No-reference IQA is more difficult than full-reference IQA because no reference image is available. In this paper, we propose a novel no-reference IQA algorithm that measures the contrast of an image. The proposed algorithm is based on the just-noticeable difference (JND), which exploits properties of the human visual system (HVS). Experimental results show that the proposed method performs better than conventional no-reference IQA methods.
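To make the JND idea concrete, the rough sketch below scores contrast by counting pixels whose local luminance deviation exceeds a luminance-adaptation visibility threshold. This is not the paper's algorithm; the threshold model follows the commonly used Chou-Li style background-luminance adaptation, and the window size, constants, and final pooling are assumptions made only for illustration. It assumes NumPy and OpenCV are available.

```python
# Illustrative JND-based contrast score (an assumption, not the proposed method):
# the fraction of pixels whose local luminance deviation is visible, i.e.
# exceeds a background-luminance-dependent just-noticeable difference.
import cv2
import numpy as np

def luminance_jnd(bg):
    """Approximate visibility threshold for background luminance bg in [0, 255]."""
    t = np.empty_like(bg, dtype=np.float64)
    dark = bg <= 127
    t[dark] = 17.0 * (1.0 - np.sqrt(bg[dark] / 127.0)) + 3.0   # low-luminance branch
    t[~dark] = (3.0 / 128.0) * (bg[~dark] - 127.0) + 3.0       # high-luminance branch
    return t

def jnd_contrast_score(gray):
    """Fraction of pixels whose local deviation from the background exceeds the JND."""
    gray = gray.astype(np.float64)
    bg = cv2.blur(gray, (5, 5))           # local background luminance estimate
    diff = np.abs(gray - bg)              # local luminance deviation
    return float(np.mean(diff > luminance_jnd(bg)))

if __name__ == "__main__":
    img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)  # hypothetical test image
    print(jnd_contrast_score(img))        # higher value = more perceivable contrast
```

The intent of such a measure is that a low-contrast image produces few supra-threshold deviations and thus a low score, without requiring any reference image.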