Quality assessment is performed through the use of a variety of quality attributes, so identifying the attributes relevant to a given assessment task is crucial. We focus on 2.5D print quality assessment and its quality attributes. An experiment with observers revealed the attributes most frequently used to judge the quality of 2.5D prints, both with and without reference images. Colour, sharpness, elevation, lightness, and naturalness are the five most frequently used attributes in both the with-reference and no-reference cases. We also observed that content, previous experience and knowledge, and aesthetic appearance may influence quality judgements.
In this work, we present the results of a psycho-physical experiment in which a group of volunteers rated the quality of a set of audio-visual sequences. The sequences contained up to three types of distortion: video coding, packet loss, and frame freezing. The original content used for the experiment consisted of a set of high-definition audio-visual sequences. Impairments were inserted only into the video component of the sequences, while the audio component remained unimpaired. The objective of this particular experiment was to analyze different types of source degradations and to compare the transmission scenarios in which they occur. Given the nature of these degradations, the analysis focuses on the visual component of the sequences. The experiment was conducted following the basic directions of the immersive experimental methodology.
The quality assessment of Depth-Image-Based-Rendering (DIBR) synthesized views is very challenging owing to the new types of distortions they introduce, so traditional 2D quality metrics may fail to evaluate the quality of the synthesized views. In this paper, we propose a full-reference metric to assess the quality of DIBR-synthesized views. First, since the object shift in the synthesized view is approximately linear, an affine transformation is used to warp each pixel in the reference image to its corresponding position in the distorted image. In addition, since synthesis distortions mainly occur in dis-occluded areas, a dis-occlusion mask obtained from the depth map of the original viewpoint is used to weight the final distortions between the synthesized image and the reference image. Experimental results on the IRCCyN/IVC DIBR image database show that the proposed weighted PSNR (PSNR') outperforms the state-of-the-art metrics dedicated to DIBR-synthesized views (3DSwIM, VSQA, MP-PSNR, MW-PSNR) and achieves a gain of 36.85% (in terms of PLCC) over PSNR. The weighted SSIM (SSIM') achieves a gain of 13.33% (in terms of PLCC) over SSIM.
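The snippet below is a minimal sketch of this weighting idea, not the authors' implementation: the reference view is warped by a given affine transform that compensates the roughly linear object shift, and squared errors are emphasized in dis-occluded regions marked by a mask derived from the original-view depth map. The mask construction (dilated depth discontinuities) and the weighting scheme (1 + mask) are simplifying assumptions made here for illustration.

```python
# Sketch of a dis-occlusion-weighted PSNR for DIBR-synthesized views.
# Inputs are assumed to be single-channel (grayscale) images of equal size.
import cv2
import numpy as np

def disocclusion_mask(depth, grad_thresh=8.0, dilate_px=5):
    # Dis-occlusions tend to appear near strong horizontal depth discontinuities;
    # mark those discontinuities and dilate them along the warping direction.
    grad = np.abs(cv2.Sobel(depth.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3))
    mask = (grad > grad_thresh).astype(np.uint8)
    kernel = np.ones((1, dilate_px), np.uint8)
    return cv2.dilate(mask, kernel).astype(np.float32)

def weighted_psnr(reference, synthesized, depth, affine, max_val=255.0):
    h, w = synthesized.shape[:2]
    # Warp the reference view with the 2x3 affine matrix so that it is
    # spatially aligned with the synthesized view before comparison.
    ref_warped = cv2.warpAffine(reference.astype(np.float32), affine, (w, h))
    # Weight squared errors more heavily inside the dis-occluded regions
    # (the "1 + mask" weighting is an assumption of this sketch).
    weights = 1.0 + disocclusion_mask(depth)
    diff = ref_warped - synthesized.astype(np.float32)
    mse = np.sum(weights * diff ** 2) / np.sum(weights)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

In practice the affine matrix passed to weighted_psnr could be estimated from matched keypoints between the two views, for example with cv2.estimateAffine2D.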
3D meshes have become a common tool in several computer vision applications, and the performance of these applications depends heavily on mesh quality. Several methods have been proposed in the literature to quantify it. In this paper, we propose a 3D mesh quality measure based on the fusion of selected features. The goal is to take advantage of the strengths of each feature and thus improve the overall performance. The selected features are a set of 3D mesh quality metrics and a geometric attribute. The fusion step is realized using a Support Vector Regression (SVR) model. The 3D Mesh General database is used to evaluate our method. The obtained results, in terms of correlation with the subjective judgements, show the relevance of the proposed framework.
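The following sketch illustrates the fusion step with scikit-learn, assuming each mesh is described by a small feature vector of objective metric scores plus one geometric attribute. The feature count, the kernel and hyper-parameters, and the randomly generated stand-in data are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of SVR-based fusion of mesh quality features into a single score.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_meshes, n_features = 120, 4            # e.g. 3 metric scores + 1 geometric attribute (assumed)
X = rng.random((n_meshes, n_features))   # placeholder feature vectors, one row per distorted mesh
y = X @ rng.random(n_features) + 0.1 * rng.standard_normal(n_meshes)  # placeholder subjective scores

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Normalize the heterogeneous features, then regress the subjective score
# with an RBF-kernel Support Vector Regression model.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)

# Evaluate, as the abstract does, by correlation with the subjective judgements.
plcc, _ = pearsonr(model.predict(X_test), y_test)
print(f"PLCC on held-out meshes: {plcc:.3f}")
```

With real data, X would hold the per-mesh scores of the selected quality metrics and the geometric attribute, and y the corresponding mean opinion scores from the 3D Mesh General database.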