The perceptual process of images is hierarchical: humans tend first to perceive global structural information, such as the shapes of objects, and then focus on local regional details, such as texture. Furthermore, it is widely believed that structural information plays the most important role in the tasks of utility assessment and quality assessment, especially in new scenarios such as free-viewpoint television, where synthesized views contain geometric distortions around objects. We thus hypothesize that, in certain application scenarios, the degradation of structural information in an image is more annoying to human observers than the degradation of texture. To confirm this hypothesis, a bilateral filtering based model (BF-M) is proposed with reference to a recent subjective perceptual test. In the proposed model, bilateral filters are first used to separate structure from texture information in images. Afterward, features that capture object properties and features that reflect texture information are extracted separately from the response and the residual of the bilateral filtering. Contour-based, shape-based, and texture-based estimators are then built from the corresponding extracted features. Finally, the model is obtained by combining the three estimators according to the target task. With this task-based model, one can investigate the role of structure/texture information in a given task by inspecting the corresponding optimized weights assigned to the estimators. In this paper, the hypothesis and the performance of the BF-M are verified on the CU-Nantes database for utility estimation and on the SynTEX and IRCCyN/IVC-DIBR databases for quality estimation. Experimental results show that (1) structural information does play a greater role in several tasks, and (2) the performance of the BF-M is comparable to state-of-the-art utility metrics as well as quality metrics designed for texture synthesis and view synthesis. This validates that the proposed model can also be applied as a task-based parametric image metric.
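A minimal sketch of the structure/texture separation step described above, using OpenCV's bilateral filter; the parameter values are illustrative and not those of the BF-M. The filter response is taken as the structure layer and the residual as the texture layer.

```python
import cv2
import numpy as np

def split_structure_texture(gray, d=9, sigma_color=75, sigma_space=75):
    """Return (structure, texture) layers of a grayscale image.

    The bilateral filter response is used as the structure layer;
    the residual (image minus response) is used as the texture layer.
    """
    gray = gray.astype(np.float32)
    structure = cv2.bilateralFilter(gray, d, sigma_color, sigma_space)
    texture = gray - structure
    return structure, texture

if __name__ == "__main__":
    img = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
    s, t = split_structure_texture(img)
    # Contour/shape features would be computed on `s`, texture features on `t`.
```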
In this paper, we propose a new no-reference image quality assessment (NR-IQA) method. The method uses local binary patterns (LBP) to label the local textures of an image. These labels form an LBP map (texture map) that characterizes the image textures. We then compute the histogram of the texture map, weighting each LBP label according to its saliency, which is obtained with a computational model of visual attention. The weighted histogram is used as input to a regression method that estimates the quality of the image. Experimental results show that the proposed method achieves competitive prediction accuracy and outperforms other state-of-the-art NR-IQA methods. At the same time, the method is simple and reliable, demanding few computational resources in terms of memory and processing time.
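A hedged sketch of the pipeline described above: LBP labels weighted by a saliency map and pooled into a histogram feature vector. The saliency map is assumed to come from an external visual-attention model; a uniform map is used here as a stand-in.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def saliency_weighted_lbp_histogram(gray, saliency=None, P=8, R=1.0):
    """Histogram of 'uniform' LBP labels, each pixel's vote weighted by its saliency."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")  # texture (LBP) map
    n_bins = P + 2                                            # labels 0..P+1 for 'uniform'
    if saliency is None:
        saliency = np.ones_like(gray, dtype=float)            # stand-in saliency map
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), weights=saliency)
    return hist / (hist.sum() + 1e-12)

# The weighted histogram would then be fed to a regressor (e.g., sklearn.svm.SVR)
# trained on subjective quality scores.
```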
Blur is one of the most frequently encountered visual distortions in images. It can be either deliberately introduced to highlight certain objects or caused by acquisition/processing; both cases usually induce spatially varying blur or out-of-focus blur. Despite its wide occurrence, only a few dedicated image quality metrics can be found in the literature, and most of them assume uniformly blurred images. Consequently, in this paper we propose a quality assessment framework that handles both types of blur and predicts their inherent level of annoyance. To this end, a local perceptual blurriness map providing the level of blur at each location in an image is first generated. Then, depth ordering is obtained from the image to characterize the placement of the objects in the scene. Next, visual saliency information is computed to account for the visual importance of each object. Finally, the local perceptual blurriness map is weighted using both the objects' depth ordering and the saliency map to provide the final blur scores. Experimental results show that the proposed metric achieves good prediction performance compared to state-of-the-art metrics.
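An illustrative sketch of the final pooling step: the local blurriness map is weighted by depth-ordering and saliency maps to produce a single blur-annoyance score. The specific combination rule below (multiplicative weights with a normalized sum) is an assumption for illustration, not the exact formulation of the proposed metric.

```python
import numpy as np

def pooled_blur_score(blur_map, depth_order_map, saliency_map):
    """All inputs are HxW maps normalized to [0, 1]; higher depth_order = closer object."""
    weights = depth_order_map * saliency_map          # visual importance of each location
    return float((blur_map * weights).sum() / (weights.sum() + 1e-12))
```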
We evaluate improvements to image utility assessment algorithms obtained by including saliency information, as well as the saliency prediction performance of three saliency models based on successful utility estimators. Fourteen saliency models were incorporated into several utility estimation algorithms, resulting in significantly improved performance in some cases, with RMSE reductions of between 3% and 25%. Algorithms designed for utility estimation benefit less from the addition of saliency information than those originally designed for quality estimation, suggesting that estimators designed to measure utility already capture some degree of saliency information and that saliency is important for utility estimation. To test this hypothesis, three saliency models are created from the NICE and MS-DGU utility estimators by convolving logical maps of image contours with a Gaussian function. The performance of these utility-based models reveals that well-performing utility estimation algorithms can also predict saliency to an extent, reaching approximately 77% of the prediction performance of state-of-the-art saliency models when evaluated on two common saliency datasets.
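A minimal sketch of building a saliency map from a logical contour map by Gaussian smoothing, in the spirit of the contour-based models described above. The Canny edge detector and the sigma values are illustrative choices, not those used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import canny

def contour_based_saliency(gray, sigma_edges=2.0, sigma_blur=15.0):
    """Convolve a logical (binary) contour map with a Gaussian to obtain a saliency map."""
    edges = canny(gray, sigma=sigma_edges)              # logical contour map
    sal = gaussian_filter(edges.astype(float), sigma=sigma_blur)
    return sal / (sal.max() + 1e-12)                    # normalize to [0, 1]
```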
This article proposes a new no-reference image quality assessment method that blindly predicts the quality of an image. The method is based on a machine learning technique that uses texture descriptors. In the proposed method, texture features are computed by decomposing images into texture information using multiscale local binary pattern (MLBP) operators; in particular, varying the parameters of the local binary pattern operator generates the set of MLBP operators. The features used for training the prediction algorithm are the histograms of these MLBP channels. The results show that, compared with other state-of-the-art no-reference methods, the proposed method is competitive in terms of prediction precision and computational complexity.
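A hedged sketch of multiscale LBP (MLBP) feature extraction: the LBP operator is applied with several (P, R) parameter pairs and the per-scale histograms are concatenated into one feature vector. The particular (P, R) pairs below are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def mlbp_features(gray, scales=((8, 1.0), (16, 2.0), (24, 3.0))):
    """Concatenated 'uniform' LBP histograms over several (P, R) scales."""
    feats = []
    for P, R in scales:
        lbp = local_binary_pattern(gray, P, R, method="uniform")
        n_bins = P + 2
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        feats.append(hist)
    return np.concatenate(feats)   # feature vector for the quality regressor
```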
Image quality assessment (IQA) has been an important issue in image processing. Although subjective quality assessment is the most appropriate way to evaluate image processing algorithms, obtaining subjective scores is difficult because of the time and cost involved, so objective quality assessment algorithms are widely used as a substitute. Objective quality assessment is divided into three types based on the availability of a reference image: full-reference, reduced-reference, and no-reference IQA. No-reference IQA is more difficult than full-reference IQA because no reference image is available. In this paper, we propose a novel no-reference IQA algorithm that measures the contrast of an image. The proposed algorithm is based on the just-noticeable difference (JND), which models properties of the human visual system (HVS). Experimental results show that the proposed method performs better than conventional no-reference IQA methods.
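The abstract does not detail the formulation, so the sketch below only illustrates the kind of building block such a metric might use: a classical luminance-adaptation JND threshold (after Chou and Li) and the fraction of pixels whose local luminance deviation exceeds it, as a crude perceived-contrast indicator. This is an assumption for illustration, not the proposed algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def luminance_jnd(bg):
    """Per-pixel visibility threshold as a function of background luminance (0-255)."""
    bg = bg.astype(float)
    low = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    high = (3.0 / 128.0) * (bg - 127.0) + 3.0
    return np.where(bg <= 127, low, high)

def jnd_contrast_score(gray):
    """Fraction of pixels whose deviation from the local background exceeds the JND."""
    gray = gray.astype(float)
    bg = uniform_filter(gray, size=5)            # local background luminance
    diff = np.abs(gray - bg)                     # local luminance deviation
    return float((diff > luminance_jnd(bg)).mean())
```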
Due to the massive popularity of digital images and videos over the past several decades, the need for automated quality assessment (QA) is greater than ever. Accordingly, QA research has focused mainly on improving prediction accuracy. However, for many application areas, such as consumer electronics, runtime performance and related computational considerations are just as important as accuracy. Most modern QA algorithms exhibit large computational complexity; however, this complexity does not necessarily prevent them from achieving low runtimes if hardware resources are used appropriately. GPUs, which offer a large amount of parallelism and a specialized memory hierarchy, are well suited for QA algorithm deployment. In this paper, we analyze a massively parallel GPU implementation of the most apparent distortion (MAD) full-reference image QA algorithm, with optimizations guided by a microarchitectural analysis. A shared-memory-based implementation of the local statistics computation yielded a 25% speedup over the original implementation. We describe the optimizations that produce the best results and justify our recommendations with descriptions of the underlying microarchitectural behavior. Although our study focuses on a single algorithm, the image-processing primitives it uses are fundamentally similar to those found in most modern QA algorithms.
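A simplified CPU reference for the kind of local-statistics computation that the paper maps onto GPU shared memory: per-block mean and standard deviation over sliding windows. The block size and stride here are illustrative and not necessarily those used in MAD.

```python
import numpy as np

def local_mean_std(gray, block=16, stride=4):
    """Sliding-window mean and std maps; the GPU version stages each tile in shared memory."""
    gray = gray.astype(np.float64)
    H, W = gray.shape
    means, stds = [], []
    for y in range(0, H - block + 1, stride):
        for x in range(0, W - block + 1, stride):
            tile = gray[y:y + block, x:x + block]   # in the GPU version, this tile is
            means.append(tile.mean())               # loaded once into shared memory so
            stds.append(tile.std())                 # neighboring threads reuse the pixels
    n_rows = (H - block) // stride + 1
    n_cols = (W - block) // stride + 1
    return (np.array(means).reshape(n_rows, n_cols),
            np.array(stds).reshape(n_rows, n_cols))
```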
This research project was undertaken to develop a procedure for evaluating perceptually based pictorial image quality for smartphone camera captures. Tone quality, color quality, and sharpness and noise were evaluated in separate experiments. In each test, observers scaled overall quality and then the individual image quality characteristic of test images from a variety of smartphone cameras relative to an anchor image. Results on the individual image quality characteristics relative to overall quality, as well as on the development of objective measurements that correlate with the visual ratings for sharpness and noise, were reported in 2016 (Farnand et al., 2016). In this work, the visual ratings for color quality were assessed relative to objective measurements, with the results indicating that high correlations between the two can be achieved. The perceptual results correlated best with colorimetric information taken from the test images themselves rather than from images of test charts captured under lab conditions, which did not necessarily contain colors representative of those important to the scenes. Results also indicated that contrast information was needed in addition to the colorimetric information to achieve high correlation between subjective and objective data for all scenes. Further, results for a beach scene suggest that sand may serve as a useful memory color for predicting the perceptual performance of device captures.
The dead leaves image model is often used for measurement of the spatial frequency response (SFR) of digital cameras, where response to fine texture is of interest. It has a power spectral density (PSD) similar to natural images and image features of varying sizes, making it useful for measuring the texture-blurring effects of non-linear noise reduction, which may not be well analyzed by traditional methods. The standard approach for analyzing images of this model is to compare observed PSDs to the analytically known one. However, recent works have proposed a cross-correlation based approach which promises more robust measurements via full-reference comparison with the known true pattern. A major assumption of this method is that the observed image and reference image can be aligned (registered) with subpixel accuracy. In this paper, we study the effects of registration errors on the calculation of texture-based SFR and its derivative metrics (such as MTF50), in order to determine how accurate this registration must be for reliable results. We also propose a change to the dead leaves cross-correlation algorithm, recommending the use of the absolute value of the transfer function rather than its real part. Simulations of registration error on both real and simulated observed images reveal that small amounts of misregistration (as low as 0.15 px) can cause large variability in MTF curves derived using the real part of the transfer function, while MTF curves derived from the absolute value are significantly less affected.
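A hedged sketch of the comparison discussed above: estimating the camera transfer function from a registered dead-leaves capture and its known reference pattern, then taking either the real part or (as recommended here) the absolute value. The simple spectral-ratio estimator below is illustrative; the published algorithm's exact windowing and radial averaging are not reproduced.

```python
import numpy as np

def texture_transfer_function(observed, reference, eps=1e-12):
    """Return (tf_abs, tf_real): two 2-D transfer-function estimates from a spectral ratio."""
    O = np.fft.fft2(observed - observed.mean())
    R = np.fft.fft2(reference - reference.mean())
    cross = O * np.conj(R)                 # cross-spectrum between capture and reference
    tf = cross / (np.abs(R) ** 2 + eps)    # per-frequency transfer-function estimate
    return np.abs(tf), np.real(tf)         # |TF| is less sensitive to misregistration phase
```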
Measurement and assessment should accompany any type of electronic display. In particular, quantitative analysis is indispensable as a reference for determining the direction of development. In this paper, we present an approach for quantifying holographic display images. To this end, we built a holographic display system with a spatial light modulator and a Fourier lens, and adopted the indices needed for evaluation, such as contrast ratio, cross talk, color dispersion, and uniformity. These indices have been widely employed in the fields of classical 2D displays and multi-view 3D displays. However, they have rarely been applied to holographic 3D display systems because no concrete methodology has existed until now. We propose a standard test image and show that the measured values can be used to select the better method of generating the holographic image. We believe that this quantitative approach to assessing holographic images will support more accurate and systematic development in the field.
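A minimal sketch of two of the indices mentioned above, computed from measured luminance maps of the reconstructed image. The mean-luminance contrast ratio and the grid-sampled min/max uniformity below follow common 2D-display conventions and are used only as illustrations, not as the paper's exact measurement procedure.

```python
import numpy as np

def contrast_ratio(white_luminance, black_luminance):
    """Ratio of mean luminance of a full-white patch to that of a full-black patch."""
    return float(np.mean(white_luminance) / max(np.mean(black_luminance), 1e-12))

def uniformity(luminance_map, points=3):
    """Min/max luminance ratio over a points-x-points grid of interior sample locations."""
    H, W = luminance_map.shape
    ys = np.linspace(0, H - 1, points + 2, dtype=int)[1:-1]
    xs = np.linspace(0, W - 1, points + 2, dtype=int)[1:-1]
    samples = luminance_map[np.ix_(ys, xs)]
    return float(samples.min() / max(samples.max(), 1e-12))
```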