Recently, many deep learning applications have been deployed on mobile platforms. To deploy them on mobile hardware, the networks must be quantized. The quantization of computer vision networks has been studied extensively, but there have been few studies on the quantization of image restoration networks. In a previous study, following earlier work on weight quantization for deep networks, we examined the effect of quantizing both activations and weights on image quality. In this paper, we introduce adaptive bit-depth control of the input patch, which achieves a greater reduction in quantization bits than previous work while maintaining image quality close to that of the floating-point network. The bit depth is adapted to the maximum pixel value of each input data block. This preserves the linearity of the values within the block, so the deep neural network does not need to be retrained for a changed data distribution. With the proposed method we achieved a 5 percent reduction in hardware area and power consumption for our custom deep network hardware while maintaining image quality in both subjective and objective measurements, which is an important achievement for mobile platform hardware.
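The adaptive bit-depth control described above can be sketched as follows. The function name, the flat-list block representation, and the power-of-two shift rule are illustrative assumptions, not the paper's exact implementation:

```python
def adaptive_bitdepth_reduce(patch, out_bits=8):
    """Adaptively reduce the bit depth of one input block.

    The shift is chosen from the block's maximum pixel value, so a
    bright block is right-shifted to fit `out_bits` while a dark
    block keeps its low-order bits untouched. Shifting every pixel
    by the same power of two keeps the values linear within the
    block, so the shape of the input distribution is unchanged.
    (Illustrative sketch; names and the exact rule are assumptions.)
    """
    max_val = max(patch)
    needed_bits = max(max_val.bit_length(), 1)  # bits needed for the block maximum
    shift = max(needed_bits - out_bits, 0)      # shift only when the block overflows out_bits
    return [p >> shift for p in patch], shift   # shift is kept so scale can be restored
```

Because the shift is uniform inside a block, a dark block (e.g. all values below 256) passes through untouched, which is one way the scheme avoids losing shadow detail.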
Recently, many deep learning applications have been deployed on mobile platforms. To deploy them on mobile hardware, the networks must be quantized. The quantization of computer vision networks has been studied extensively, but there have been few studies on the quantization of image restoration networks. In this paper, following a previous study on weight quantization for deep networks, we study the effect of activation quantization on image quality. The study targets raw RGBW image demosaicing for 10-bit images, with the weight bit width fixed at 8 bits. Experimental results show that 11-bit activation quantization sustains image quality at a level similar to the floating-point network. Although activation bit depths can be very small in computer vision applications, image restoration tasks such as demosaicing require many more bits. An 11-bit width may not fit general-purpose hardware such as an NPU, GPU, or CPU, but for custom hardware it is very important for reducing hardware area and power as well as memory size.
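The n-bit activation quantization under study can be illustrated with a uniform fake-quantization sketch. The unsigned scheme and the clipping range `x_max` are assumptions for illustration, not details from the paper:

```python
def quantize_activation(x, bits=11, x_max=4.0):
    """Uniform unsigned fake-quantization of one activation value.

    Clips to [0, x_max], rounds to one of 2**bits - 1 uniform steps,
    and dequantizes back to float, mimicking an n-bit activation
    path. (Sketch only; the clipping range x_max and the unsigned
    uniform scheme are assumptions.)
    """
    levels = (1 << bits) - 1           # number of quantization steps
    step = x_max / levels              # width of one step
    q = min(max(round(x / step), 0), levels)  # clip and round to a level
    return q * step                    # dequantize back to float
```

Each extra bit halves the step width, so the worst-case rounding error of an in-range activation also halves, which is why restoration tasks sensitive to small intensity errors need larger bit depths than classification.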
Can a mobile camera see better through a display? Under Display Camera (UDC) is the most awaited feature in the mobile market in 2020, enabling a more attractive user experience; however, there are technological obstacles to obtaining acceptable UDC image quality. Mobile OLED panels struggle to reach beyond 20% light transmittance, leading to challenging capture conditions. To improve light sensitivity, some solutions use binned output, losing spatial resolution. Optical diffraction of light in the panel induces contrast degradation and various visual artifacts, including image ghosts and a yellowish tint. The standard approach to addressing image quality issues is to improve the blocks in the imaging pipeline, including the Image Signal Processor (ISP) and the deblur block. In this work, we propose a novel approach to improving UDC image quality: we replace all blocks in the UDC pipeline with a single all-in-one network, UDC d^Net. The proposed solution can deblur and reconstruct a full-resolution image directly from a non-Bayer raw image, e.g. Quad Bayer, without requiring a remosaic algorithm that rearranges the non-Bayer data into a Bayer pattern. The proposed network has a very large receptive field and can easily deal with large-scale visual artifacts, including color moiré and ghosts. Experiments show a significant improvement in image quality versus the conventional pipeline: over 4 dB in PSNR on the popular Kodak benchmark dataset.
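As a note on the reported metric: PSNR is computed from the mean squared error against a reference image, so a +4 dB gain corresponds to roughly a 2.5x drop in MSE. A minimal sketch of the standard formula (the flat pixel lists and the 8-bit `max_val` default are illustrative assumptions):

```python
import math

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size images,
    given as flat lists of pixel values. Higher is better."""
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```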
Under Display Camera (UDC) technology is being developed to eliminate camera holes and place cameras behind the display panel, in line with the full-screen display trend in mobile phones. However, these camera systems cause attenuation and diffraction as light passes through the panel, which inevitably deteriorates the camera image. In particular, the degradation of image quality due to diffraction and flares is severe, and this paper discusses techniques for restoring it. The diffraction compensation algorithm in this paper targets real-time processing through a hardware implementation in the sensor for preview and video modes, and with effective techniques we were able to reduce computation by about 40 percent.