Recently, with the release of 108-megapixel image sensors, the photo quality of smartphone cameras, including detail and texture, has improved substantially. This became possible only by utilizing remosaic technology, which reorganizes the color filter array into the Bayer pattern compatible with the existing Image Signal Processor (ISP) of a commodity AP. However, finding optimized parameter configurations for the remosaic block requires great effort and a long tuning period to secure the desired image quality and sensor characteristics. This paper proposes a deep neural network based camera auto-tuning system for the remosaic ISP block. First, for the learning phase, a large image quality database is created by randomly varying the tuning registers applied to reference images. Second, a virtual ISP model is trained to predict image quality as the sensor tuning registers change. Finally, an optimization layer generates the sensor remosaic parameters that achieve the user's target image quality. Experiments verify that the proposed system secures image quality at the level of professionally hand-tuned photography. In particular, the remosaic artifacts of false color, color desaturation, and broken lines are improved significantly, by more than 23%, 4%, and 12%, respectively.
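As a rough illustration of the two learned components described above, the following sketch trains nothing itself; it only shows a frozen quality-prediction model being used to optimize a register vector by gradient descent. The register count, metric dimensions, and target values are hypothetical, since the abstract does not specify them, and PyTorch is used merely as an example framework.

```python
# Sketch of the two-stage tuning flow (hypothetical dimensions and names).
import torch
import torch.nn as nn

NUM_REGISTERS = 32   # assumed number of remosaic tuning registers
NUM_METRICS = 3      # assumed IQ metrics: false color, desaturation, broken lines

# Learning phase: a virtual ISP model that predicts image quality from tuning registers.
virtual_isp = nn.Sequential(
    nn.Linear(NUM_REGISTERS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, NUM_METRICS),
)
# ... the model would be trained on the randomly generated (register, quality) database ...

# Optimization layer: freeze the virtual ISP model and adjust only the registers
# toward the user's target image quality.
for p in virtual_isp.parameters():
    p.requires_grad_(False)

registers = torch.rand(NUM_REGISTERS, requires_grad=True)  # initial register guess
target_quality = torch.tensor([0.9, 0.9, 0.9])             # assumed user target

optimizer = torch.optim.Adam([registers], lr=1e-2)
for step in range(500):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(virtual_isp(registers), target_quality)
    loss.backward()          # gradients flow only into the register vector
    optimizer.step()
```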
In camera development, because image quality is subjective and tuning complexity keeps increasing, building a model correlated with the image signal processor (ISP) pipeline is a very demanding task. To overcome these problems, this paper proposes an automatic image quality tuning framework based on a Deep Neural Network (DNN). An image quality metric (IQM) is defined to quantify subjective image quality and effectively represent actual user perception, enabling fast reproduction of the desired image with minimal computing resources. The proposed optimization methodology consists of Phase 1, ISP modeling, and Phase 2, parameter optimization. Phase 1 constructs a model between the ISP parameters and the image quality metric. In Phase 2, a partially connected layer is added at the input of the network to optimize the ISP parameters. Using backpropagation, the network selectively updates only the weights of the partial connections, which automatically derives the optimal parameters for high-quality images. This idea has been implemented and evaluated on a commercial 16-megapixel CMOS image sensor (CIS) and a state-of-the-art ISP.
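The sketch below illustrates one plausible reading of Phase 2: the partially connected input layer is modeled as one trainable weight per ISP parameter prepended to the frozen Phase 1 model, so backpropagation updates only those connections. The parameter and metric counts, the sigmoid range constraint, and the target score are assumptions, not details from the paper.

```python
# Sketch of Phase 2 (hypothetical sizes): a partially connected layer is prepended
# to the frozen Phase-1 ISP model, and backpropagation updates only its weights.
import torch
import torch.nn as nn

NUM_PARAMS = 64   # assumed number of ISP parameters
NUM_IQM = 8       # assumed number of image quality metrics

# Phase 1 model: ISP parameters -> image quality metric (assumed already trained).
isp_model = nn.Sequential(
    nn.Linear(NUM_PARAMS, 256), nn.ReLU(),
    nn.Linear(256, NUM_IQM),
)
for p in isp_model.parameters():
    p.requires_grad_(False)   # keep the learned ISP model fixed

# Partially connected input layer: one weight per ISP parameter, so each weight
# directly encodes one tunable parameter value.
partial_weights = nn.Parameter(torch.rand(NUM_PARAMS))

target_iqm = torch.full((NUM_IQM,), 0.95)   # assumed target perception score
optimizer = torch.optim.SGD([partial_weights], lr=0.05)

for step in range(1000):
    optimizer.zero_grad()
    params = torch.sigmoid(partial_weights)      # keep parameters in a valid range
    loss = nn.functional.mse_loss(isp_model(params), target_iqm)
    loss.backward()                              # only the partial connections receive gradients
    optimizer.step()

optimal_params = torch.sigmoid(partial_weights).detach()  # derived ISP parameters
```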
Inventory management and handling in warehouse environments have transformed large retail fulfillment centers. Often hundreds of autonomous agents scurry about, fetching and delivering products to fulfill customer orders. Repetitive movements such as these are ideal for a robotic platform to perform. One of the major hurdles for an autonomous system in a warehouse is accurate robot localization in a dynamic industrial environment. Previous LiDAR-based localization schemes such as adaptive Monte Carlo localization (AMCL) are effective in indoor environments and can be initialized in new environments with relative ease. However, AMCL can be negatively influenced by accumulated odometry drift and relies primarily on a single modality for scene understanding, which limits localization performance. We propose a robust localization system that combines multiple sensor sources and deep neural networks for accurate real-time localization in warehouses. Our system employs a novel architecture consisting of multiple heterogeneous deep neural networks. The overall architecture employs a single multi-stream framework to aggregate the sensor information into a final robot location probability distribution. Ideally, the integration of multiple sensors will produce a robust system even when one sensor fails to produce reliable scene information.
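A minimal sketch of such a multi-stream fusion network is given below. The choice of modalities (LiDAR scan, camera features, odometry), the per-stream sub-networks, the feature sizes, and the discretized location grid are all assumptions for illustration; the abstract does not describe the heterogeneous networks in this detail.

```python
# Sketch of a multi-stream fusion network (assumed modalities and sizes).
import torch
import torch.nn as nn

class MultiStreamLocalizer(nn.Module):
    def __init__(self, num_cells=1024):
        super().__init__()
        # One stream per sensor modality (assumed: LiDAR scan, camera features, odometry).
        self.lidar_stream = nn.Sequential(nn.Linear(360, 256), nn.ReLU())
        self.camera_stream = nn.Sequential(nn.Linear(512, 256), nn.ReLU())
        self.odom_stream = nn.Sequential(nn.Linear(3, 64), nn.ReLU())
        # Fusion head: aggregate the stream features into a score per map cell.
        self.fusion = nn.Sequential(
            nn.Linear(256 + 256 + 64, 512), nn.ReLU(),
            nn.Linear(512, num_cells),
        )

    def forward(self, lidar, camera, odom):
        features = torch.cat(
            [self.lidar_stream(lidar), self.camera_stream(camera), self.odom_stream(odom)],
            dim=-1,
        )
        # Softmax yields a probability distribution over discretized robot locations.
        return torch.softmax(self.fusion(features), dim=-1)

# Usage with dummy sensor readings (batch of 1).
model = MultiStreamLocalizer()
location_probs = model(torch.rand(1, 360), torch.rand(1, 512), torch.rand(1, 3))
```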