Visual distortions in processed 360-degree content consumed through head-mounted displays (HMDs) are perceived very differently than in traditional 2D content. To better understand how compression-related artifacts affect the overall perceived quality of 360-degree videos, this paper presents a subjective quality assessment study and analyzes how well objective metrics correlate with the gathered subjective scores. In contrast to previous related work, the proposed study focuses on the equiangular cubemap projection and includes specific visual distortions (blur, blockiness, H.264 compression, and cubemap seams) on both monoscopic and stereoscopic sequences. The objective metrics performance analysis is based on metrics computed both in the projection domain and on the viewports, the latter being closer to what the user actually sees. The results show that, overall, objective metrics computed on viewports correlate better with the subjective scores in our dataset than the same metrics computed in the projection domain. Moreover, the proposed dataset and objective metrics analysis serve as a benchmark for the development of new perception-optimized quality assessment algorithms for 360-degree videos, which is still a largely open research problem.
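The viewport-based evaluation described above can be illustrated with a toy sketch. This is not the study's actual pipeline: it assumes grayscale equirectangular frames (the study itself uses the equiangular cubemap projection), nearest-neighbor sampling, and PSNR as the base metric; `extract_viewport` and `viewport_psnr` are hypothetical helper names introduced here for illustration.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def extract_viewport(erp, yaw, pitch, fov_deg=90.0, size=256):
    """Render a perspective viewport from an equirectangular frame
    (gnomonic projection, nearest-neighbor sampling; toy quality)."""
    H, W = erp.shape[:2]
    f = (size / 2) / np.tan(np.radians(fov_deg) / 2)
    xs, ys = np.meshgrid(np.arange(size) - size / 2 + 0.5,
                         np.arange(size) - size / 2 + 0.5)
    # Unit viewing rays in camera space.
    d = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    # Rotate rays: pitch about the x-axis, then yaw about the y-axis.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    y, z = y * cp - z * sp, y * sp + z * cp
    x, z = x * cy + z * sy, -x * sy + z * cy
    # Ray direction -> spherical coords -> equirectangular pixel.
    lon = np.arctan2(x, z)                    # [-pi, pi]
    lat = np.arcsin(np.clip(y, -1.0, 1.0))    # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return erp[v, u]

def viewport_psnr(ref, dist, yaws=(0.0, np.pi / 2, np.pi, -np.pi / 2), pitch=0.0):
    """Average PSNR over a set of sampled viewports instead of the full projection."""
    return np.mean([psnr(extract_viewport(ref, yw, pitch),
                         extract_viewport(dist, yw, pitch)) for yw in yaws])
```

In this setup, `psnr(ref, dist)` on the raw frames corresponds to a projection-domain score, while `viewport_psnr` pools the same metric over rendered viewports.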
In recent years, 3D reconstruction systems comprising multiple depth sensors have received increasing interest for dynamic scene reconstruction and related applications. Publicly available ground-truth data are of limited usefulness for the quality assessment of self-recorded data delivered by customized stereo configurations. In this paper, we propose a framework that incorporates versatile strategies for the quantitative and qualitative evaluation of a multi-stereo reconstruction system and its intermediate products. Besides the design of suitable calibration objects for quantitative measurements, the framework exploits multiview data redundancy and generated novel views for objective quality assessment and for obtaining subjective ratings from users. We demonstrate the applicability of our evaluation system in experiments with several stereo matching algorithms and view fusion approaches, along with a pair-comparison-based user study. We believe that our proposed evaluation framework is beneficial for the assessment of 3D products derived from self-recorded dynamic data of comparable setups, for example, in the context of subsequent augmented reality applications.
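Pair-comparison user studies like the one mentioned above are typically scaled into per-stimulus quality scores. The abstract does not specify the scaling method; one common choice is the Bradley-Terry model, sketched here with the standard iterative (MM/Zermelo) estimator as an illustrative assumption, not the paper's actual procedure.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry preference scores from a pairwise win-count
    matrix, where wins[i, j] is how often stimulus i was preferred over j.
    Uses the classic MM (Zermelo) fixed-point iteration."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        total_wins = wins.sum(axis=1)
        new_p = np.empty(n)
        for i in range(n):
            denom = 0.0
            for j in range(n):
                if i == j:
                    continue
                n_ij = wins[i, j] + wins[j, i]   # comparisons between i and j
                if n_ij > 0:
                    denom += n_ij / (p[i] + p[j])
            new_p[i] = total_wins[i] / denom if denom > 0 else p[i]
        p = new_p / new_p.sum()                  # normalize to sum to 1
    return p
```

Given a matrix of preference counts from the user study, the returned scores induce a quality ranking of the compared reconstruction variants.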
In this contribution, an objective metric for the quality evaluation of light field images is presented. The method exploits the depth information of a scene, which is captured with high accuracy by the light field imaging system. The depth map is estimated from both the original and the impaired light field data. Then, a similarity measure is applied, and a mapping is performed to link the depth distortion to the perceived quality. Experimental tests comparing the proposed metric with state-of-the-art metrics demonstrate its effectiveness.