Smartphone cameras have progressed considerably in recent years and now match entry-level DSLR cameras in many standard situations. One domain where the difference has remained obvious is portrait photography. Smartphone manufacturers now equip their flagship models with special modes that computationally simulate a shallow depth of field. We propose a method to quantitatively evaluate the quality of such computational bokeh in a reproducible way, focusing both on the quality of the bokeh itself (depth of field, shape) and on artifacts introduced by the challenge of accurately separating a subject's face from the background, especially across complex transitions such as curly hair. Depth-of-field simulation is a complex topic, and no standard metrics for out-of-focus blur currently exist. The proposed method is based on a perceptual, systematic analysis of pictures shot in our lab. We show that the depth of field of the best mobile devices is as shallow as that of DSLRs, but we also reveal processing artifacts that are absent on DSLRs. Our primary goal is to help customers compare smartphone cameras with one another and with DSLRs. We also hope that our method will guide smartphone makers in their development and thereby contribute to advancing mobile portrait photography.
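The optical gap this abstract refers to follows directly from thin-lens geometry: at the same f-number and framing, a small sensor's short focal length yields a far larger depth of field, which is why phones must simulate shallow DOF computationally. The sketch below is a minimal illustration using the textbook hyperfocal-distance approximation; the specific camera parameters (an 85 mm f/1.8 full-frame portrait lens vs. a ~7 mm f/1.8 smartphone lens, with typical circle-of-confusion values) are illustrative assumptions, not measurements from the paper.

```python
# Sketch: thin-lens depth-of-field comparison between a DSLR and a
# smartphone camera. All parameter values are illustrative assumptions.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm):
    """Return (near_limit, far_limit) of acceptable sharpness in mm.

    Standard hyperfocal-distance approximation:
        H  = f^2 / (N * c) + f
        Dn = s * (H - f) / (H + s - 2f)
        Df = s * (H - f) / (H - s)      (infinite when s >= H)
    """
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    if subject_mm >= H:
        far = float("inf")
    else:
        far = subject_mm * (H - focal_mm) / (H - subject_mm)
    return near, far

subject = 2000.0  # portrait distance: 2 m

# Full-frame DSLR, 85 mm f/1.8 lens, CoC ~0.029 mm -> DOF of a few cm.
dn, df = depth_of_field(85.0, 1.8, subject, 0.029)
print(f"DSLR  DOF: {df - dn:.0f} mm")

# Smartphone main camera, ~7 mm f/1.8 lens, CoC ~0.005 mm -> DOF of ~1.7 m.
dn, df = depth_of_field(7.0, 1.8, subject, 0.005)
span = "infinite" if df == float("inf") else f"{df - dn:.0f} mm"
print(f"Phone DOF: {span}")
```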
Quality assessment of Depth-Image-Based Rendering (DIBR) synthesized views is very challenging owing to the new types of distortions involved, so traditional 2D quality metrics may fail to evaluate the quality of the synthesized views. In this paper, we propose a full-reference metric to assess the quality of DIBR-synthesized views. First, since the object shift in the synthesized view is approximately linear, an affine transformation is used to warp pixels in the reference image to their corresponding positions in the distorted image. Second, because synthesis distortions occur mainly in dis-occluded areas, a dis-occlusion mask obtained from the depth map of the original viewpoint is used to weight the final distortions between the synthesized image and the reference image. Experimental results on the IRCCyN/IVC DIBR image database show that the proposed weighted PSNR (PSNR') outperforms the state-of-the-art metrics dedicated to DIBR-synthesized views (3DSwIM, VSQA, MP-PSNR, and MW-PSNR) and achieves a gain of 36.85% (in terms of PLCC) over PSNR. The weighted SSIM (SSIM') achieves a gain of 13.33% (in terms of PLCC) over SSIM.
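A minimal sketch of the weighting idea, assuming the reference image has already been affine-warped to the synthesized viewpoint and the dis-occlusion mask is available as a per-pixel weight map; the function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def weighted_psnr(ref_warped, synthesized, weights, peak=255.0):
    """PSNR in which each pixel's squared error is scaled by a weight map.

    ref_warped  : reference image affine-warped to the synthesized viewpoint
    synthesized : DIBR-synthesized image
    weights     : per-pixel weights, e.g. larger in dis-occluded areas
    """
    ref = ref_warped.astype(np.float64)
    syn = synthesized.astype(np.float64)
    w = weights.astype(np.float64)
    wmse = np.sum(w * (ref - syn) ** 2) / np.sum(w)
    return 10.0 * np.log10(peak ** 2 / wmse)

# Toy usage: up-weight errors inside a hypothetical dis-occluded strip.
h, w = 64, 64
ref = np.random.randint(0, 256, (h, w)).astype(np.float64)
syn = np.clip(ref + np.where(np.arange(w) < 8, 20.0, 0.0), 0, 255)
mask = np.ones((h, w))
mask[:, :8] = 5.0  # emphasize the dis-occluded region
print(f"PSNR' = {weighted_psnr(ref, syn, mask):.2f} dB")
```

Whether the mask is binary or graded is a design choice: a graded mask lets errors near dis-occlusion boundaries count partially rather than all-or-nothing.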
The depth of field (DOF) of an auto-stereoscopic display refers to the depth range in 3D space within which objects can be depicted with only a small amount of blur. It provides a measurable index of the display's performance in reproducing light fields of 3D scenes. Previous studies have analyzed the maximum spatial frequencies of aliasing-free images depicted on planes parallel to the display surface, and for multilayer displays several formulae representing upper bounds on these maximum frequencies have been given. However, these formulae provide little information on how much blur is present in the reproduced fields, since the contributions of low-frequency signals are simply neglected. Such signals are frequently damaged on multilayer displays, especially when the range of viewing angles becomes wide. To address these drawbacks, we present a novel framework for the DOF analysis of multilayer displays. The analysis begins with a close look at the synthesis of layer images, which can be cast as a linear least squares problem with nonnegativity constraints. This numerical procedure is then reinterpreted in the context of multilayer displays, where some of the connections between "depth" and "blur" become apparent. Finally, experimental results supporting these observations are presented.
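The layer-synthesis step can be prototyped as nonnegative least squares (NNLS): stack the target light-field rays into a vector b, encode each layer pixel's contribution to each ray in a matrix A, and solve min ||Ax - b|| subject to x >= 0. The sketch below uses a random toy system under an assumed additive layer model; real multilayer displays use structured projection matrices and often a multiplicative model handled in the log domain, so this is only a schematic illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import nnls

# Toy stand-in for the ray/layer projection: A[i, j] is the (assumed)
# contribution of layer pixel j to light-field ray i.
rng = np.random.default_rng(0)
n_rays, n_layer_pixels = 200, 80
A = rng.random((n_rays, n_layer_pixels))
b = rng.random(n_rays)  # target light field sampled over the viewing range

# Nonnegativity-constrained least squares: layer values cannot be negative.
x, residual = nnls(A, b)

# The residual norm ||Ax - b|| is a crude proxy for reproduction blur:
# ray content the layers cannot express remains in the residual, and how
# this residual grows with scene depth is what a DOF analysis tracks.
print(f"active layer pixels: {(x > 0).sum()} / {n_layer_pixels}")
print(f"residual norm: {residual:.4f}")
```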