Today, most advanced mobile phone cameras integrate multi-image technologies such as high dynamic range (HDR) imaging. The objective of HDR imaging is to overcome some of the limitations imposed by sensor physics, which constrain the performance of the small camera sensors used in mobile phones compared to the larger sensors used in digital single-lens reflex (DSLR) cameras. In this context, it becomes increasingly important to establish new image quality measurement protocols and test scenes that can differentiate the image quality performance of these devices. In this work, we describe image quality measurements for HDR scenes covering local contrast preservation, texture preservation, color consistency, and noise stability. By monitoring these four attributes in both the bright and dark parts of the image, over different dynamic ranges, we benchmarked four leading smartphone cameras built on different technologies and contrasted the results with subjective evaluations.
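To give a concrete flavor of how such patch-based measurements might be computed, the following is a minimal sketch in Python/NumPy. It assumes a linearized grayscale capture of a test chart with known masks for a bright uniform patch, a dark uniform patch, and a textured patch; the function names and the specific formulas (Michelson contrast, patch SNR, a high-frequency energy ratio) are illustrative stand-ins, not the measurement protocol described above. Color consistency would additionally require color patches and a reference white point and is omitted for brevity.

```python
import numpy as np

def region_stats(img, mask):
    """Mean and standard deviation of the pixel values inside a patch mask."""
    vals = img[mask]
    return vals.mean(), vals.std()

def hdr_patch_metrics(img, bright_mask, dark_mask, tex_crop, ref_texture):
    """Toy per-capture metrics for an HDR test chart.

    img         -- linearized grayscale capture, float in [0, 1]
    bright_mask / dark_mask -- boolean masks over uniform gray patches
    tex_crop    -- 2-D crop of the captured textured patch
    ref_texture -- the same patch from the reference chart definition
    """
    mb, sb = region_stats(img, bright_mask)
    md, sd = region_stats(img, dark_mask)

    # Local contrast preservation: Michelson contrast between a bright
    # and a dark patch, tracked over different scene dynamic ranges.
    contrast = (mb - md) / (mb + md + 1e-12)

    # Noise stability: SNR monitored separately in the bright and dark
    # parts of the image.
    snr_bright = mb / (sb + 1e-12)
    snr_dark = md / (sd + 1e-12)

    # Texture preservation: high-frequency energy of the captured
    # texture patch relative to the reference pattern.
    hf = lambda x: np.abs(np.diff(x, axis=0)).mean() + np.abs(np.diff(x, axis=1)).mean()
    texture = hf(tex_crop) / (hf(ref_texture) + 1e-12)

    return {"contrast": contrast, "snr_bright": snr_bright,
            "snr_dark": snr_dark, "texture": texture}
```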
Smartphone cameras have progressed considerably in recent years and have even caught up with entry-level DSLR cameras in many standard situations. One domain where the difference remained obvious was portrait photography. Now smartphone manufacturers equip their flagship models with special modes that computationally simulate a shallow depth of field. We propose a method to quantitatively evaluate the quality of such computational bokeh in a reproducible way, focusing both on the quality of the bokeh itself (depth of field, blur-spot shape) and on the artifacts introduced by the challenge of accurately separating the face of a subject from the background, especially along complex transitions such as curly hair. Depth-of-field simulation is a complex topic, and standard metrics for out-of-focus blur do not currently exist. The proposed method is based on perceptual, systematic analysis of pictures shot in our lab. We show that the depth of field of the best mobile devices is as shallow as that of DSLRs, but we also reveal processing artifacts that are nonexistent on DSLRs. Our primary goal is to help customers compare smartphone cameras with one another and with DSLRs. We also hope that our method will guide smartphone makers in their development efforts and thereby contribute to advancing mobile portrait photography.
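As one way to make depth-of-field shallowness measurable in a lab, here is a small hedged sketch (distinct from the perceptual analysis described above): it estimates the blur-spot diameter of a defocused point light source; plotting this diameter against the light's distance from the focus plane yields a depth-of-field curve that can be compared between a phone's portrait mode and a DSLR. The thresholding approach and the thresh_ratio parameter are illustrative assumptions.

```python
import numpy as np

def blur_spot_diameter(crop, thresh_ratio=0.5):
    """Equivalent diameter (in pixels) of a defocused point-light spot.

    crop is a grayscale region containing a single out-of-focus light
    spot. The spot is segmented at a fraction of its peak intensity and
    its area is converted to the diameter of a disc of equal area.
    """
    spot = crop > thresh_ratio * crop.max()
    area = float(spot.sum())
    return 2.0 * np.sqrt(area / np.pi)
```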
The IEEE P1858 CPIQ Standard is a new industry standard for assessing camera image quality on mobile devices. The CPIQ standard provides test methodologies for evaluating seven image attributes: spatial frequency response, texture blur, visual noise, color uniformity, chroma level, lateral chromatic displacement, and local geometric distortion. In addition, the CPIQ standard provides mathematical transforms between objective metric values and perceived image quality quantified in just noticeable differences (JNDs), as well as a framework to combine individual attributes into a prediction of overall image quality. This study aims to validate the CPIQ set of image quality metrics and the CPIQ prediction of overall image quality. The two key components of the study are objective measurements of image quality in the lab and subjective evaluation of real-world images by human observers. Nine smartphones were used in the study, with expected camera quality ranging from low to high. The CPIQ methodology was implemented and applied in an industrial lab, and measurements of the CPIQ metrics were obtained under varying lighting conditions. The subjective evaluation study was performed in a university lab, using paired comparison and the softcopy quality ruler as test methods. The results of this study revealed that the objective measurements defined in the CPIQ standard are highly correlated with perceived image quality.
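The combination step can be illustrated with a short sketch. CPIQ builds on Keelan's multivariate formalism, in which per-attribute quality losses expressed in JNDs are pooled with a Minkowski-type metric; the fixed exponent used below is a simplification of the standard's variable exponent, so treat this only as a sketch of the idea rather than the standard's formula.

```python
import numpy as np

def combine_jnd_losses(losses, p=3.0):
    """Pool per-attribute quality losses (in JNDs) into one overall loss.

    A Minkowski-type combination: with p > 1, the overall loss is
    dominated by the worst attribute, as human judgments tend to be.
    The CPIQ standard uses a variable exponent; p is fixed here for
    simplicity.
    """
    losses = np.asarray(losses, dtype=float)
    return float((losses ** p).sum() ** (1.0 / p))

# Example: a large texture-blur loss dominates two smaller losses.
# combine_jnd_losses([2.0, 0.5, 1.0]) -> ~2.09 JNDs overall
```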
Nowadays, many cameras embed multi-imaging (MI) technology without always giving the user the option to explicitly activate or deactivate it. MI means that the camera captures multiple images and combines them into a single final image, sometimes in a way that is completely transparent to the user. One of the reasons this technology has become very popular is that natural scenes may have a dynamic range larger than that of a camera sensor. To produce an image without under- or over-exposed areas, several input images are captured and later merged into a single high dynamic range (HDR) result. There is an obvious need to evaluate this new technology. To do so, we present laboratory setups conceived to exhibit the characteristics and artifacts peculiar to MI, and we propose metrics to progress toward an objective, quantitative evaluation of those systems. In the first part of this paper, we focus on HDR, and more precisely on contrast, texture, and color aspects. Secondly, we focus on artifacts directly related to moving objects or camera motion during a multi-exposure acquisition. We propose an approach to measure ghosting artifacts without accessing the individual source images as input, as most MI devices do not provide them. Thirdly, we expose an open question arising from MI technology about how the different smartphone makers define the exposure time of the single reconstructed image, and we describe our work toward a time-measurement solution. The last part of our study concerns the analysis of the degree of correlation between the objective results computed using the proposed laboratory setups and subjective results on real natural scenes captured with the HDR mode of a given device switched on and off.
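As one possible illustration of scoring ghosting from the final image alone, the sketch below counts duplicated edges along rows that cross a single high-contrast moving edge on a test chart: a ghost-free capture produces one gradient transition per row, while ghosting duplicates it. This detector and its edge_thresh parameter are illustrative assumptions, not the metric proposed in the paper.

```python
import numpy as np

def ghosting_score(strip, edge_thresh=0.1):
    """Toy ghosting indicator computed from the final image only.

    strip -- grayscale crop whose rows each cross exactly one
             high-contrast moving edge of the test chart.
    Returns the mean number of extra edges per row (0.0 = ghost-free).
    """
    grad = np.abs(np.diff(strip.astype(float), axis=1))
    above = grad > edge_thresh
    # Count distinct above-threshold runs per row (= distinct edges),
    # including a run that starts at the first column.
    rises = (np.diff(above.astype(int), axis=1) == 1).sum(axis=1)
    edges = rises + above[:, 0].astype(int)
    return float(np.clip(edges - 1, 0, None).mean())
```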
We propose an objective measurement protocol to evaluate the autofocus performance of a digital still camera. As most pictures today are taken with smartphones, we have designed the first implementation of this protocol for devices with a touchscreen trigger. The lab evaluation must match users' real-world experience. Users expect an autofocus that is both accurate and fast, so that every picture from their smartphone is sharp and captured precisely when they press the shutter button. There is a strong need for an objective measurement to help users choose the best device for their usage and to help camera manufacturers quantify their performance and benchmark different technologies.
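To make the two quantities concrete, a minimal sketch follows: it scores per-frame sharpness with a gradient-energy proxy and reads off the shooting time lag as the first timestamp at which a burst reaches a set fraction of its best sharpness. The frame-acquisition step, the 0.9 ratio, and the function names are assumptions for illustration; the actual protocol is defined by the lab setup described above.

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy proxy for the sharpness of a grayscale frame."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def autofocus_timing(frames, timestamps, ratio=0.9):
    """Accuracy and speed from a burst recorded after the touch trigger.

    frames     -- grayscale frames captured from the trigger onward
    timestamps -- capture times in seconds, relative to the trigger
    Returns (time lag to reach `ratio` of peak sharpness, sharpness trace).
    """
    s = np.array([sharpness(f) for f in frames])
    first_sharp = int(np.nonzero(s >= ratio * s.max())[0][0])
    return timestamps[first_sharp], s
```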