After sound, 2D images, and video, 3D models represented by polygonal meshes are emerging as a major content type, driven by technological advances in 3D acquisition [1]. 3D meshes can undergo several degradations due to acquisition, compression, pre-processing, or transmission, which distort the mesh and therefore affect its visual rendering. Because a human observer is generally located at the end of this pipeline, quality assessment of the content is required. In this paper we propose a view-independent Blind Mesh Quality Assessment Index (BMQI) based on the estimation of visual saliency and roughness. Given a distorted 3D mesh, the metric can assess the perceived visual quality without access to the reference content, as humans do. The metric requires no assumption about the degradation to be evaluated, which makes it powerful and usable in any context requiring quality assessment of 3D meshes. The obtained correlations with subjective human quality scores are strong and highly competitive with those of existing full-reference quality assessment metrics.
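To make the general idea concrete, the following minimal sketch combines a simple per-vertex roughness estimate (dispersion of the normals of incident faces) with an arbitrary per-vertex saliency array into a saliency-weighted no-reference score. Both the roughness definition and the combination rule are illustrative assumptions, not the actual BMQI formulation.

```python
import numpy as np

def face_normals(vertices, faces):
    """Unit normal of each triangular face (vertices: (N,3), faces: (M,3) ints)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)

def vertex_roughness(vertices, faces):
    """Per-vertex roughness as dispersion of incident face normals:
    0 on flat regions, approaching 1 where normals disagree strongly."""
    normals = face_normals(vertices, faces)
    mean_n = np.zeros_like(vertices)
    counts = np.zeros(len(vertices))
    for k in range(3):                       # accumulate normals at each vertex
        np.add.at(mean_n, faces[:, k], normals)
        np.add.at(counts, faces[:, k], 1)
    mean_n /= np.maximum(counts, 1)[:, None]
    # Resultant length of averaged unit normals: 1 when all agree, so
    # roughness = 1 - |mean normal|.
    return 1.0 - np.linalg.norm(mean_n, axis=1)

def blind_quality_score(vertices, faces, saliency):
    """Hypothetical no-reference score: saliency-weighted mean roughness.
    `saliency` is a per-vertex array in [0, 1] from any saliency model."""
    rough = vertex_roughness(vertices, faces)
    return float(np.sum(saliency * rough) / (np.sum(saliency) + 1e-12))
```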
In this paper we present several methods to augment a graph-based foreground detection scheme that uses the smallest nonzero eigenvector to compute saliency scores in an image. First, we present an augmented background prior that improves the foreground segmentation results. We then present and demonstrate three complementary methods that allow detection of foregrounds containing multiple subjects. The first method performs an iterative segmentation of the image to "pull out" the various salient objects. In the second method, we use a higher-dimensional embedding of the image graph to estimate the saliency scores and extract multiple salient objects. The last method constructs a saliency map from a predetermined number of the smallest eigenvectors, using a proposed heuristic based on eigenvalue differences. Experimental results show that the proposed methods succeed in extracting multiple foreground subjects more reliably than the original method.
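A minimal sketch of the core spectral step, assuming a 4-connected pixel graph with Gaussian intensity affinities: the eigenvector of the smallest nonzero eigenvalue of the graph Laplacian (the Fiedler vector) is read off as the saliency score, and an eigenvalue-gap heuristic suggests how many eigenvectors to keep. The graph construction and the use of the unnormalized Laplacian are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh

def fiedler_saliency(img, sigma=0.1):
    """Saliency from the smallest nonzero eigenvector of a 4-connected
    image graph. `img` is a 2-D float array in [0, 1]; edge weights
    decay with intensity difference."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = img.ravel()
    rows, cols, vals = [], [], []
    for di, dj in ((0, 1), (1, 0)):          # right and down neighbours
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        wgt = np.exp(-((flat[a] - flat[b]) ** 2) / sigma ** 2)
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = coo_matrix((np.concatenate(vals),
                    (np.concatenate(rows), np.concatenate(cols))),
                   shape=(h * w, h * w)).tocsr()
    d = np.asarray(W.sum(axis=1)).ravel()
    L = diags(d) - W                         # unnormalized graph Laplacian
    # Two smallest eigenpairs: the first is the trivial constant vector,
    # the second (Fiedler vector) serves as the saliency score.
    _, vecs = eigsh(L, k=2, which="SM")
    return vecs[:, 1].reshape(h, w)

def n_components_by_gap(eigenvalues):
    """Eigenvalue-gap heuristic (sketch): keep eigenvectors up to the
    largest jump in the sorted eigenvalue sequence."""
    return int(np.argmax(np.diff(eigenvalues))) + 1
```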
Visual attention refers to the cognitive mechanism that allows us to select and process only the relevant information arriving at our eyes; eye movements therefore depend strongly on visual attention. Saliency models, which attempt to simulate visual gaze and consequently visual attention, have been continuously developed over recent years. Color information has been shown to play an important role in visual attention and is used in saliency computations; however, psychophysical evidence explaining the relationship between color and saliency is lacking. We will present the results of an experiment aimed at studying and quantifying the saliency of colors of different hues and lightness, specified in CIELab coordinates. In the experiment, 12 observers were asked to report the number of color patches presented at random locations on a masking gray background. Eye movements were recorded using an SMI remote eye-tracking system and used to validate the reported data. In the presentation, we will compare the reported data and visual-gaze data for different colors and discuss the implications for our understanding of color saliency and color processing.
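For reference, patches specified in CIELab coordinates are typically rendered by converting through the standard Lab → XYZ → sRGB formulas. The sketch below assumes a D65 white point and the usual sRGB matrix and gamma encoding; the actual display calibration used in the experiment is not described in the abstract.

```python
import numpy as np

def lab_to_srgb(L, a, b):
    """Convert a CIELab colour (D65 white point) to an sRGB triple in [0, 1]."""
    # Lab -> XYZ
    fy = (L + 16.0) / 116.0
    fx, fz = fy + a / 500.0, fy - b / 200.0
    def finv(t):  # inverse of the Lab companding function
        return t ** 3 if t > 6.0 / 29.0 else 3.0 * (6.0 / 29.0) ** 2 * (t - 4.0 / 29.0)
    xn, yn, zn = 0.95047, 1.0, 1.08883       # D65 reference white
    xyz = np.array([xn * finv(fx), yn * finv(fy), zn * finv(fz)])
    # XYZ -> linear sRGB
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = m @ xyz
    # Gamma-encode, then clip out-of-gamut values
    rgb = np.where(rgb <= 0.0031308, 12.92 * rgb,
                   1.055 * np.clip(rgb, 0, None) ** (1 / 2.4) - 0.055)
    return np.clip(rgb, 0.0, 1.0)
```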
We evaluate improvements to image utility assessment algorithms obtained by including saliency information, as well as the saliency-prediction performance of three saliency models based on successful utility estimators. Fourteen saliency models were incorporated into several utility estimation algorithms, significantly improving performance in some cases, with RMSE reductions of 3 to 25%. Algorithms designed for utility estimation benefit less from the addition of saliency information than those originally designed for quality estimation, suggesting that estimators designed to measure utility already capture some saliency information, and that saliency is important for utility estimation. To test this hypothesis, three saliency models were created from the NICE and MS-DGU utility estimators by convolving logical maps of image contours with a Gaussian function. The performance of these utility-based models reveals that high-performing utility estimation algorithms can also predict saliency to an extent, reaching approximately 77% of the prediction performance of state-of-the-art saliency models on two common saliency datasets.
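A minimal sketch of the contour-to-saliency construction described above, with a generic Sobel gradient threshold standing in for the contour maps actually produced by NICE or MS-DGU; the threshold and the Gaussian width are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def contour_saliency(img, grad_thresh=0.1, sigma=5.0):
    """Build a saliency map from image contours: threshold the gradient
    magnitude to obtain a logical contour map, then convolve it with a
    Gaussian to spread the response into a dense map.
    `img` is a 2-D float array in [0, 1]."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    contours = np.hypot(gx, gy) > grad_thresh    # logical map of contours
    saliency = ndimage.gaussian_filter(contours.astype(float), sigma)
    return saliency / (saliency.max() + 1e-12)   # normalize to [0, 1]
```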