Today, most advanced mobile phone cameras integrate multi-image technologies such as high dynamic range (HDR) imaging. The objective of HDR imaging is to overcome some of the limitations imposed by sensor physics, which constrain the performance of the small camera sensors used in mobile phones compared to the larger sensors used in digital single-lens reflex (DSLR) cameras. In this context, it is increasingly important to establish new image quality measurement protocols and test scenes that can differentiate the image quality performance of these devices. In this work, we describe image quality measurements for HDR scenes covering local contrast preservation, texture preservation, color consistency, and noise stability. By monitoring these four attributes in both the bright and dark parts of the image, over different dynamic ranges, we benchmarked four leading smartphone cameras built on different technologies and contrasted the results with subjective evaluations.
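As a rough illustration of how such bright/dark-region measurements can be set up (a minimal sketch, not the protocol used in this work; the patch coordinates, metric definitions, and function names are illustrative assumptions), the following Python/NumPy fragment computes simple noise, local-contrast, and texture statistics on chart patches:

```python
import numpy as np

def patch(img, y, x, size=32):
    """Extract a square analysis patch; coordinates are chart-specific."""
    return img[y:y + size, x:x + size].astype(np.float64)

def noise_level(uniform_patch):
    """RMS noise on a nominally uniform gray patch."""
    return uniform_patch.std()

def michelson_contrast(light_patch, dark_patch):
    """Local contrast between two neighbouring gray patches."""
    lo, hi = dark_patch.mean(), light_patch.mean()
    return (hi - lo) / (hi + lo)

def texture_energy(textured_patch):
    """Crude texture-preservation proxy: energy of a Laplacian
    high-pass response over a textured chart region."""
    p = textured_patch
    lap = (4 * p[1:-1, 1:-1]
           - p[:-2, 1:-1] - p[2:, 1:-1]
           - p[1:-1, :-2] - p[1:-1, 2:])
    return lap.var()
```

Tracking how these statistics change between the bright and dark halves of the chart, and across several scene dynamic ranges, yields the kind of stability measures described above.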
Owing to fast-evolving technologies and the increasing importance of social media, the camera is one of the most important components of today's mobile phones. Smartphones are now taking over a large share of the compact camera market. A simple reason for this might be revealed by the famous quote: "The best camera is the one that's with you". But with the vast choice of devices and the great promises of manufacturers, there is a demand to characterize image quality and performance in very simple terms, in order to provide information that helps in choosing the best-suited device. The existing evaluation systems are either not entirely objective or still under development and have not yet reached a useful level. Therefore, the industry itself has come together and created a new objective quality evaluation system named Valued Camera eXperience (VCX). It is designed to reflect the user experience with respect to the image quality and the performance of a camera in a mobile device. Members of the initiative so far are: Apple, Huawei, Image Engineering, LG, MediaTek, Nomicam, Oppo, TCL, Vivo, and Vodafone.
At present there are at least three publicly known ranking systems for cell phone cameras (CPIQ/IEEE P1858, in preparation; DxOMark; and VCX) that try to tell us which camera phone provides the best image quality. Now that the IEEE is about to publish the P1858 standard, which currently covers only six image quality parameters, the question arises how many parameters are needed to characterize the camera in a current cell phone, and how important each factor is for the perceived quality. To test the importance of each factor, the IEEE Camera Phone Image Quality (CPIQ) group has created psychophysical studies for all six image quality factors described in the first version of IEEE P1858. That way, a connection can be made between the physical measurement of an image quality aspect and the perceived quality.
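The shape of such a connection is typically an empirical fit. As a hedged sketch (the polynomial mapping, the Minkowski pooling exponent, and all names below are illustrative assumptions, not the CPIQ definitions), one might map each objective score to a perceived quality loss in just-noticeable differences (JNDs) and then pool the per-factor losses:

```python
import numpy as np

def fit_metric_to_jnd(objective_scores, subjective_jnds, degree=2):
    """Least-squares polynomial mapping from an objective measurement
    (e.g. a texture acutance value) to perceived quality loss in JNDs,
    as estimated from a psychophysical study."""
    return np.polynomial.Polynomial.fit(objective_scores, subjective_jnds, degree)

def combined_quality_loss(jnd_losses, p=3.0):
    """Pool per-factor JND losses with a Minkowski sum; the exponent
    here is an illustrative assumption."""
    return np.power(np.sum(np.power(jnd_losses, p)), 1.0 / p)
```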
Analyzing the depth structure implied in two-dimensional images is one of the most active research areas in computer vision. Here, we propose a method of utilizing texture within an image to derive its depth structure. Though most approaches for deriving depth from a single still image utilize luminance edges and shading to estimate scene structure, relatively little work has been done to utilize the abundant texture information in images. Our new approach begins by analyzing two cues, the local spatial frequency and orientation distributions of the textures within an image, which are used to compute local slant information across the image. The slant and frequency information are merged to create a unified depth map, providing an important channel of image structure information that can be combined with other available cues. The capabilities of the algorithm are illustrated for a variety of images of planar and curved surfaces under perspective projection, in most of which the depth structure is effortlessly perceived by human observers. Since these operations are readily implementable in neural hardware in early visual cortex, they represent a plausible model of the human perception of the depth structure of images from texture gradient cues.
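To make the frequency cue concrete, here is a minimal sketch (assuming a homogeneous texture and grayscale input; it omits the orientation-distribution cue and the slant/frequency merging described above, and all function names are illustrative): the local peak spatial frequency is measured patchwise with a windowed FFT and converted into a relative depth map.

```python
import numpy as np

def peak_frequency(patch):
    """Dominant local spatial frequency of a texture patch
    (cycles/pixel): the radius of the strongest non-DC Fourier peak."""
    win = (np.hanning(patch.shape[0])[:, None]
           * np.hanning(patch.shape[1])[None, :])
    f = np.abs(np.fft.fftshift(np.fft.fft2(patch * win)))
    cy, cx = np.array(f.shape) // 2
    f[cy, cx] = 0.0                      # suppress the DC term
    py, px = np.unravel_index(np.argmax(f), f.shape)
    return np.hypot((py - cy) / f.shape[0], (px - cx) / f.shape[1])

def frequency_map(img, size=32, step=16):
    """Grid of local peak frequencies across the image."""
    h, w = img.shape
    return np.array([[peak_frequency(img[y:y + size, x:x + size])
                      for x in range(0, w - size, step)]
                     for y in range(0, h - size, step)])

def depth_from_frequency(freqs):
    """Under perspective projection of a homogeneous texture, more
    distant surface patches project to finer image texture, so the
    local frequency itself serves as a relative depth map (up to an
    unknown global scale)."""
    return freqs / freqs.max()
```

For slanted surfaces it is the gradient of frequency across the image, rather than the absolute frequency, that carries the slant information, which is why the full method also analyzes local orientation distributions.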
This paper analyzes the surface appearance of a textured fluorescent object. First, the bispectral radiance factor of the fluorescent object is decomposed into a fluorescent emission (luminescent) component and a reflection component, which are summarized in the Donaldson matrix. Second, the observed spectral images are described as a product of two factors: a set of spectral functions depending on wavelength, and a set of weighting factors representing the surface texture. An algorithm is proposed to estimate the bispectral functions and the location weights from spectral images observed under multiple illuminants. Third, two textured images, one for the reflection component and one for the luminescent component, are constructed using the estimates of the spectral functions and the location weights. Finally, the surface appearance of the same fluorescent object under arbitrary illumination is constructed by combining the two component images.
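The relighting step lends itself to a compact matrix formulation. As a minimal sketch (assuming the Donaldson matrix and the two spatial weight maps have already been estimated; the function name and the discretization are illustrative), the reflection component lives on the diagonal of the Donaldson matrix and the luminescent transfer on its off-diagonal part:

```python
import numpy as np

def render_fluorescent(donaldson, illuminant, w_reflect, w_lumin):
    """Render a textured fluorescent surface under a new illuminant.

    donaldson  : (L, L) Donaldson matrix; the diagonal holds ordinary
                 reflectance, the off-diagonal part the luminescent
                 transfer from excitation to emission wavelengths.
    illuminant : (L,) spectral power of the new light source.
    w_reflect, w_lumin : (H, W) spatial weight maps for the reflection
                 and luminescent components, estimated from images
                 observed under multiple illuminants.
    Returns an (H, W, L) spectral image.
    """
    reflect_spec = np.diag(donaldson) * illuminant           # reflection term
    lumin_only = donaldson - np.diag(np.diag(donaldson))     # off-diagonal part
    lumin_spec = lumin_only @ illuminant                     # emission excited by E
    return (w_reflect[..., None] * reflect_spec
            + w_lumin[..., None] * lumin_spec)
```

Because the luminescent term depends on the illuminant's power at the excitation wavelengths rather than on wavelength-by-wavelength reflection, the two components respond very differently to a change of illuminant, which is what makes the decomposition useful for predicting appearance.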