Portraits are one of the most common use cases in photography, especially smartphone photography. However, evaluating image quality on real portraits is costly, inconvenient, and difficult to reproduce. We propose a new method to evaluate a wide range of detail preservation renditions on realistic mannequins. This laboratory setup covers all commercial cameras, from videoconference devices to high-end DSLRs. Our method is based on 1) training a machine learning model on a perceptual-scale target, 2) using two different regions of interest per mannequin, chosen according to the quality of the input portrait image, and 3) merging the two quality scales to produce the final wide-range scale. The proposed method yields a fine-grained, wide-range detail preservation quality score, and numerical experiments show that it is robust to noise and sharpening, unlike commonly used alternatives such as texture acutance on the Dead Leaves chart.
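As an illustration of step 3, merging two overlapping quality scales could be sketched as a linear cross-fade inside an overlap band (a hypothetical blend; the overlap bounds and the cross-fade itself are assumptions for illustration, not the paper's actual merging formula):

```python
def merge_scales(score_a, score_b, overlap=(0.4, 0.6)):
    """Combine two quality scores (each in [0, 1]) defined on
    overlapping quality ranges into one wide-range score.

    Hypothetical sketch: below the overlap trust scale A (low-quality
    ROI), above it trust scale B (high-quality ROI), and cross-fade
    linearly in between.
    """
    lo, hi = overlap
    if score_a < lo:          # clearly low quality: use scale A as-is
        return score_a
    if score_a > hi:          # clearly high quality: switch to scale B
        return score_b
    # Inside the overlap band, blend the two scales linearly.
    t = (score_a - lo) / (hi - lo)
    return (1 - t) * score_a + t * score_b
```

The single pivot-free blend avoids a visible jump at the hand-off point between the two regions of interest.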
Driving assistance is increasingly common in new car models. Most driving assistance systems rely on automotive cameras and computer vision. Computer vision, regardless of the underlying algorithms and technology, requires images of good quality, where "good" is defined according to the task. This notion of good image quality remains to be defined for computer vision, as its criteria differ markedly from those of human vision: humans, for instance, have better contrast detection ability than typical image chains. The aim of this article is to compare three metrics designed for object detection with computer vision: the Contrast Detection Probability (CDP) [1, 2, 3, 4], the Contrast Signal to Noise Ratio (CSNR) [5], and the Frequency of Correct Resolution (FCR) [6]. For this purpose, the computer vision task of reading the characters on a license plate is used as a benchmark. The objective is to check the correlation between each objective metric and the ability of a neural network to perform this task. To this end, a protocol to test these metrics against the output of the neural network has been designed, and the pros and cons of each of the three metrics are discussed.
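The correlation check described above can be sketched as a plain Pearson coefficient between per-condition metric values and per-condition recognition accuracy (a minimal sketch; the statistical analysis actually used in the article is not specified here):

```python
import math

def pearson(metric_values, accuracies):
    """Pearson correlation between an objective metric (e.g., CDP,
    CSNR, or FCR measured per test condition) and the neural network's
    character-recognition accuracy under the same conditions."""
    n = len(metric_values)
    mx = sum(metric_values) / n
    my = sum(accuracies) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(metric_values, accuracies))
    sx = math.sqrt(sum((x - mx) ** 2 for x in metric_values))
    sy = math.sqrt(sum((y - my) ** 2 for y in accuracies))
    return cov / (sx * sy)
```

A coefficient near +1 would indicate that the metric ranks imaging conditions the same way the network's license-plate reading accuracy does.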
Near-infrared (NIR) light sources have become increasingly present in our daily lives, which has led to a growing number of cameras designed for viewing in the NIR spectrum (sometimes in addition to the visible) in the automotive, mobile, and surveillance sectors. However, camera evaluation metrics still focus mainly on sensors operating in visible light. The goal of this article is to extend our existing flare setup and objective flare metric to quantify NIR flare for different cameras and to evaluate the performance of several NIR filters. We also compare results under both visible and NIR lighting for different types of devices. Moreover, we propose a new method to measure the ISO speed rating in the visible spectrum (originally defined in the ISO 12232 standard), and an equivalent ISO rating for the NIR spectrum, using our flare setup.
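For reference, the saturation-based ISO speed in ISO 12232 reduces to S_sat = 78 / H_sat, with H_sat the exposure (in lux·s) that just saturates the sensor. A minimal sketch of that standard definition follows; the flare-setup-based measurement and the NIR-equivalent rating proposed in the article are not reproduced here:

```python
def iso_saturation_speed(h_sat_lux_s):
    """Saturation-based ISO speed per ISO 12232: S_sat = 78 / H_sat,
    where H_sat is the exposure (lux * s) that saturates the sensor.

    Sketch of the standard formula only; the article's flare-setup
    variant and its NIR extension differ from this.
    """
    if h_sat_lux_s <= 0:
        raise ValueError("saturation exposure must be positive")
    return 78.0 / h_sat_lux_s
```

For example, a sensor saturating at 0.78 lux·s would rate at ISO 100 under this definition.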
Flare, or stray light, is a visual phenomenon generally considered undesirable in photography, as it reduces image quality. In this article, we present an objective metric for quantifying the flare of a camera module's lens, together with the hardware and software tools needed to measure the spread of stray light in the image. A novel measurement setup has been developed to generate flare images in a reproducible way via a bright light source, close in apparent size and color temperature to the sun, placed both within and outside the field of view of the device. The proposed measurement works on RAW images, so the optical phenomenon is characterized without being affected by any non-linear processing the device might apply.
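"Measuring the spread of stray light" could be sketched as a radial profile of RAW pixel values around the bright source, normalized by the source peak (a hypothetical sketch; ring binning by rounded Euclidean distance is an assumption for illustration, not the article's algorithm):

```python
import math

def flare_profile(raw, source_xy, radii):
    """Average RAW pixel value in one-pixel-wide rings around a bright
    source, normalized by the source peak value.

    raw       : 2D list of linear RAW values (rows of pixels)
    source_xy : (x, y) position of the light source in the frame
    radii     : ring radii (in pixels) at which to sample the spread
    """
    sx, sy = source_xy
    peak = raw[sy][sx]
    profile = []
    for r in radii:
        # Collect every pixel whose rounded distance to the source is r.
        ring = [raw[y][x]
                for y in range(len(raw))
                for x in range(len(raw[0]))
                if round(math.hypot(x - sx, y - sy)) == r]
        profile.append(sum(ring) / len(ring) / peak)
    return profile
```

Working on linear RAW data keeps the profile monotonic in the true optical spread, which tone mapping or local contrast enhancement would otherwise distort.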
In this article, we propose a comprehensive objective metric for estimating digital camera system performance. Using the DXOMARK RAW protocol, image quality degradation indicators are objectively quantified and the information capacity is computed. The model proposed here is a significant improvement over previous digital camera evaluation protocols, in which only noise, spectral response, sharpness, and pixel count were considered. The proposed model deliberately excludes image processing techniques in order to focus on the intrinsic performance of the device. Results agree with theoretical predictions. This work has significant implications for RAW image testing in computer vision and may pave the way for advancements in other domains such as automotive or surveillance cameras.
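As a back-of-the-envelope illustration of the information-capacity idea, a Shannon-style estimate from pixel count and SNR could look like this (a toy sketch only; the article's model also folds in sharpness, spectral response, and the other degradation indicators, and its exact formula is not reproduced here):

```python
import math

def information_capacity_bits(pixel_count, snr_db):
    """Toy Shannon-style estimate of camera information capacity:
    pixel_count * log2(1 + SNR_power), with the SNR given in dB.

    A simplified sketch, not the DXOMARK RAW protocol model.
    """
    snr_power = 10 ** (snr_db / 10)          # dB -> linear power ratio
    return pixel_count * math.log2(1 + snr_power)
```

Even this crude form shows why pixel count alone is a poor performance proxy: halving the per-pixel SNR power can cost more capacity than doubling the pixel count recovers.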