Camera denoising and sharpening parameters are device-specific fixed parameters programmed into the phone's camera pipeline. The current tuning method relies solely on manual adjustment and visual evaluation of image quality, which is time-consuming and rarely achieves an optimal result. To this end, we introduce in this paper an automatic tuning method for mobile cameras that tunes the WNR parameters automatically and produces high-quality images within a feasible processing time. The method consists of two parts: a perception model and an optimization algorithm. For the first part, we developed a perception model that evaluates image quality for mobile cameras through modified CPIQ metrics. For the second part, to overcome a high-dimensional non-convex optimization problem, we developed a search strategy that finds the optimal solution by quantizing the parameter space and iteratively minimizing the error metric of the perception model.
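The quantize-and-iterate strategy described above can be sketched as a coordinate search over a discretized parameter grid. The sketch below is illustrative only: the paper's actual CPIQ-based perception model is not public, so `perception_error` is a hypothetical stand-in, and the coordinate-descent loop is one plausible reading of "iteratively minimizing the error metric".

```python
import random

# Hypothetical stand-in for the CPIQ-based perception-model error (lower is better).
# The real model would score denoising/sharpening artifacts in rendered images.
def perception_error(params):
    target = [3, 7, 1]  # assumed "ideal" WNR setting for this toy example
    return sum((p - t) ** 2 for p, t in zip(params, target))

def quantized_coordinate_search(levels, n_params, iters=20):
    """Minimize the error over a quantized grid, one parameter at a time.

    `levels` is the list of allowed quantized values for every parameter;
    each sweep fixes all parameters but one and picks the best level for it.
    """
    params = [random.choice(levels) for _ in range(n_params)]
    for _ in range(iters):
        improved = False
        for i in range(n_params):
            best = min(levels,
                       key=lambda v: perception_error(params[:i] + [v] + params[i + 1:]))
            if best != params[i]:
                params[i] = best
                improved = True
        if not improved:  # converged: a full sweep changed nothing
            break
    return params
```

Because the toy error is separable, a single sweep reaches the optimum; the real tuning problem is non-convex, which is why the paper pairs quantization with iteration rather than gradient methods.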
The implementation of automatic, adaptive filters in consumer imaging devices presents challenges for sharpness and resolution evaluation. The widely used e-SFR and other methods based on sine-wave and line targets are not necessarily representative of the capture of natural scene information. The more recent dead leaves target aims to produce texture MTFs that describe the capture of image detail under automatic, non-linear, and content-aware processes. This paper presents a newer approach to texture-MTF measurement that substitutes pictorial images for the dead leaves target. The aim of the proposed method is to measure effective MTFs indicative of system characteristics for given scenes and camera processes. Nine pictorial images, portraying a variety of subjects and textures, were set as targets for a DSLR camera and a high-end smartphone camera. The computed MTFs were found to be congruent with the dead leaves MTF. Scene dependency was observed mainly in the smartphone camera measurements, providing insight into the performance of its content-dependent processes. Results from the DSLR camera images, captured with minimal non-adaptive processing, were reasonably consistent for the majority of the scenes. Based on the variations in scene-dependent MTFs, we make recommendations for scene content best suited to texture-MTF analysis.
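A common way to compute a texture MTF from an arbitrary (pictorial or dead leaves) target is as the square root of the ratio of the captured image's radially averaged power spectrum to that of the reference. The sketch below shows this general PSD-ratio form under simplifying assumptions (registered grayscale images, no noise-power correction); the paper's exact pipeline may differ.

```python
import numpy as np

def radial_psd(img, n_bins=32):
    """Radially averaged power spectral density of a grayscale image."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    psd = np.abs(spectrum) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)          # radial frequency in index units
    bins = np.linspace(0, min(h, w) / 2, n_bins + 1)
    idx = np.digitize(r.ravel(), bins)
    return np.array([psd.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])

def texture_mtf(reference, captured, n_bins=32):
    """Effective MTF: sqrt of the captured-to-reference PSD ratio per radial band."""
    return np.sqrt(radial_psd(captured, n_bins) / radial_psd(reference, n_bins))
```

With an ideal (identity) capture the curve is flat at 1.0; low-pass processing, such as denoising that smooths texture, pulls the high-frequency end down, which is exactly the scene-dependent behavior the pictorial targets are meant to expose.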
In this study, a new camera testing method is introduced to determine and analyze the autofocus latency of cameras. This analysis allows for objective comparison and tuning of autofocus algorithms in order to deliver both sharp images and an optimal user experience with a camera. Given images taken in variable illuminance conditions with different methods of focus reset, along with high-speed recordings of the camera viewfinder throughout the reset and capture, machine vision is used to extract three different types of latency:
• The first latency is the autofocus time, measured from the end of the focus reset to full stability, as determined by slanted-edge sharpness in the camera viewfinder.
• The next latency is the user interface latency, which also comes from the viewfinder and is the time between the camera trigger and when the camera's user interface indicates that a capture took place.
• The final latency is the captured image latency, which is taken from the captured image itself and is the time between the camera trigger and when the image is actually captured.
In addition, we measure the sharpness of the final captured image in each test. Commercially available smartphone devices were tested using this method, showing significantly different results in both latency and sharpness measurements and uncovering trends in sharpness-latency trade-offs.
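The first of these latencies, autofocus time, can be estimated from a per-frame sharpness series extracted from the high-speed viewfinder recording. The sketch below is a simplified reading of that idea: it uses a gradient-energy (Tenengrad-style) score rather than the paper's slanted-edge measurement, and the stability criterion (staying within a tolerance of the final sharpness) is an assumption, not the authors' exact definition.

```python
import numpy as np

def frame_sharpness(frame):
    """Gradient-energy sharpness score for one grayscale viewfinder frame.
    (A stand-in for the paper's slanted-edge measurement.)"""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def autofocus_latency(sharpness, fps, tol=0.05):
    """Seconds from the start of the series until sharpness is stable.

    `sharpness` holds one score per frame, starting at the end of the focus
    reset; stability means every remaining frame stays within `tol` (relative)
    of the final sharpness value.
    """
    s = np.asarray(sharpness, dtype=float)
    stable = np.abs(s - s[-1]) <= tol * abs(s[-1])
    idx = len(s)
    for i in range(len(s) - 1, -1, -1):  # walk back to where stability begins
        if not stable[i]:
            break
        idx = i
    return idx / fps
```

For example, a 240 fps viewfinder recording whose sharpness ramps up and then plateaus yields the frame index of the plateau's start divided by 240, giving a sub-millisecond-resolution latency suitable for comparing devices.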