Pages 1 - 4,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

Modern digital cameras include an image processing pipeline that converts raw sensor data to a rendered RGB image. Several key steps in the pipeline operate on spatially localized data (demosaicking, noise reduction, color conversion). We show how to derive a collection of local, adaptive linear filters (kernels) that can be applied to each pixel and its neighborhood; the adaptive linear calculation approximates the performance of the modules in the conventional image processing pipeline. We also derive a set of kernels from images rendered by expert photographers. In both cases, we evaluate the accuracy of the approximation by calculating the difference between the images rendered by the camera pipeline and the images rendered by the local, linear approximation. The local, linear and learned (L3) kernels approximate the camera and expert processing pipelines with a mean S-CIELAB error of ΔE < 2. One advantage of the local, linear architecture is that the parallel application of a large number of linear kernels works well on modern hardware configurations and can be implemented power-efficiently.
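
A minimal sketch of the local, linear, learned (L3) rendering step described above, assuming the class-dependent kernels have already been learned offline; the 5x5 patch size, the classify_patch rule, and the kernels dictionary are illustrative placeholders, not the authors' implementation.

import numpy as np

def classify_patch(patch, cfa_position, n_levels=4, max_val=1.0):
    # Illustrative class index: CFA position combined with a coarse
    # quantization of the local response level (a proxy for the noise regime).
    level = min(int(patch.mean() / max_val * n_levels), n_levels - 1)
    return cfa_position * n_levels + level

def l3_render(raw, kernels, patch_size=5):
    """Apply a class-dependent linear kernel to every pixel neighborhood.

    raw     : 2-D mosaicked sensor image (float).
    kernels : dict mapping class index -> (3, patch_size**2) matrix that
              maps a flattened raw patch directly to rendered R, G, B.
    """
    h, w = raw.shape
    r = patch_size // 2
    out = np.zeros((h, w, 3))
    padded = np.pad(raw, r, mode='reflect')
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + patch_size, x:x + patch_size]
            cfa_position = (y % 2) * 2 + (x % 2)      # assume a 2x2 CFA
            c = classify_patch(patch, cfa_position)
            out[y, x] = kernels[c] @ patch.ravel()    # one linear step per pixel
    return out

Because every pixel of a given class is rendered by the same small matrix, all pixels in a class can be processed in parallel with a single matrix multiply, which is the property that makes the architecture attractive for low-power hardware.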

Digital Library: EI
Published Online: February  2016
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Digital Library: EI
Published Online: February  2016
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

Lens vignetting is a common distortion in imaging systems in which image intensity gradually decreases away from the image center. Incorrect compensation of this luminance degradation often results in a more disturbing artifact called color shading, in which the resulting image is left with shades of different colors. This paper presents a novel adaptive technique to compensate for lens vignetting while preserving color saturation and eliminating the need for per-unit calibration. As a consequence, the performance of automatic white balance algorithms is also improved. The proposed algorithm is lightweight and realizable on an FPGA architecture for real-time vignetting correction. The experimental results indicate a significant improvement in the quality of the corrected images. It is expected that the proposed algorithm will be implemented on mobile Image Signal Processors (ISPs) in the near future.
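
A minimal sketch of the key design point, a single radial gain shared by all color channels so that channel ratios (and hence color saturation) are preserved. The polynomial falloff model and its coefficients are illustrative assumptions; the adaptive, calibration-free estimation of the falloff is the part the paper itself addresses.

import numpy as np

def correct_vignetting(img, k=(0.30, 0.10)):
    """img: HxWx3 linear RGB; k: assumed radial falloff polynomial coefficients."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((yy - cy) ** 2 + (xx - cx) ** 2) / (cy ** 2 + cx ** 2)  # 0 at center, ~1 at corners
    gain = 1.0 + k[0] * r2 + k[1] * r2 ** 2      # inverse of the modeled luminance falloff
    return img * gain[..., None]                 # the same gain for R, G, B at each pixel

Applying one gain per pixel (rather than per channel) is what avoids introducing color shading; estimating the falloff adaptively from image content and limiting the gain near saturation are the parts the adaptive algorithm handles.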

Digital Library: EI
Published Online: February  2016
Pages 1 - 5,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

In this paper we present an algorithm for estimating camera system modulation transfer functions (MTFs) from highly distorted images. The algorithm estimates multiple oversampled line spread functions (LSFs) using polynomials of different orders and binning, and the ensemble of oversampled LSFs is used to estimate the final system MTF; specifically, the LSF derived from the highest-order polynomial that does not overfit is used to estimate the system MTF. We show the performance of this algorithm on synthetically generated images and on captures from a commercially available camera, and we compare it to the ISO 12233 standard.
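
A minimal sketch of the oversampled-LSF route to an MTF described above: locate the edge in each row, fit its trajectory with a polynomial, bin the projected samples into an oversampled edge spread function, differentiate, and take the Fourier transform. The centroid edge locator, 4x oversampling, and Hanning window are illustrative assumptions; empty bins and the overfitting test that selects the polynomial order are not handled here.

import numpy as np

def mtf_from_edge(edge_img, poly_order=3, oversample=4):
    rows, cols = edge_img.shape
    # 1. Locate the edge in each row from the centroid of the row derivative.
    deriv = np.abs(np.diff(edge_img, axis=1))
    x = np.arange(cols - 1)
    edge_pos = (deriv * x).sum(axis=1) / deriv.sum(axis=1)
    # 2. Fit the edge trajectory with a polynomial (the paper selects the
    #    highest order that does not overfit).
    fit = np.polyval(np.polyfit(np.arange(rows), edge_pos, poly_order),
                     np.arange(rows))
    # 3. Project every pixel onto the edge-normal axis and bin into an
    #    oversampled edge spread function (ESF).
    dist = (np.arange(cols)[None, :] - fit[:, None]).ravel()
    vals = edge_img.ravel()
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    esf = np.bincount(bins, weights=vals) / np.maximum(np.bincount(bins), 1)
    # 4. Differentiate to get the LSF, window it, and take the FFT magnitude.
    lsf = np.diff(esf)
    lsf *= np.hanning(lsf.size)
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)   # cycles per pixel
    return freqs, mtf / mtf[0]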

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

Pixel saturation is very common in digital color imaging; physically, it occurs when a CCD or CMOS pixel reaches its maximum charge capacity. It is important to relate an image to the light of the scene from which it was captured. This paper presents an FPGA hardware implementation of an algorithm that estimates the true values of saturated pixels in RAW images based on the principle of Bayesian estimation. To improve the accuracy of the Bayesian estimation, morphological dilation and connected-component labeling are used to segment the saturated regions. Each region may exhibit one of three kinds of color saturation, depending on how many channels are clipped. A Bayesian algorithm based on Xu's work handles one-channel saturation; we improve the two-channel saturation algorithm by using the unsaturated channel to predict the saturated ones, and we propose a three-channel saturation correction that uses surrounding pixels. Experiments show that the proposed hardware implementation is more effective at correcting two- and three-channel saturation.
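
A minimal sketch of the region handling described above: threshold, dilate, and label the saturated regions, then treat each region according to the number of clipped channels. A simple local channel-ratio predictor stands in for the Bayesian estimator, and the thresholds and scipy helpers are assumptions, not the paper's implementation.

import numpy as np
from scipy import ndimage

def correct_saturation(raw_rgb, sat_level=0.98):
    img = raw_rgb.copy()
    sat = img >= sat_level                            # per-channel saturation mask
    any_sat = ndimage.binary_dilation(sat.any(axis=2), iterations=2)
    labels, n = ndimage.label(any_sat)                # connected saturated regions
    for region in range(1, n + 1):
        mask = labels == region
        ring = ndimage.binary_dilation(mask, iterations=3) & ~mask   # neighborhood
        n_sat = sat[mask].any(axis=0).sum()           # how many channels clip here
        if n_sat == 3:
            # All channels clipped: extrapolate from the surrounding pixels.
            img[mask] = img[ring].mean(axis=0)
        else:
            # One or two channels clipped: predict each clipped channel from the
            # unclipped ones via channel ratios measured on the surrounding ring.
            ref = [k for k in range(3) if not sat[..., k][mask].any()]
            for c in range(3):
                if not sat[..., c][mask].any():
                    continue
                ratio = img[ring][:, c].mean() / max(img[ring][:, ref].mean(), 1e-6)
                target = mask & sat[..., c]
                img[..., c][target] = ratio * img[..., ref].mean(axis=2)[target]
    return img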

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

In this paper we describe and verify a method, called SMIP, to circumvent the trade-off between motion blur and noise, specifically for scenes with predominantly two distinct linear motions (sparse motion). The method employs image stabilization hardware to track objects during exposure while capturing two images in quick succession. The two images are combined into a single sharp image without segmentation or local motion estimation. We provide a theoretical analysis and simulations showing that the signal-to-noise ratio (SNR) increases by up to 20 dB over conventional short-exposure photography, and we demonstrate that the proposed method significantly improves the SNR compared to existing methods. Furthermore, we evaluate a proof of concept using modified off-the-shelf optical image stabilization hardware to verify the effectiveness of the method in practice, showing good correspondence between simulation and practical results.
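
For context, a minimal sketch of the blur-versus-noise trade-off that SMIP is designed to circumvent; it simulates only single-capture SNR under shot and read noise, not the SMIP capture and merging scheme itself, and the photon rate, read noise, and motion speed are illustrative assumptions.

import numpy as np

def single_capture_snr_db(exposure_s, photons_per_s=2000.0, read_noise_e=3.0):
    signal = photons_per_s * exposure_s              # collected photoelectrons
    noise = np.sqrt(signal + read_noise_e ** 2)      # shot noise + read noise
    return 20.0 * np.log10(signal / noise)

def blur_px(exposure_s, speed_px_per_s=300.0):
    return speed_px_per_s * exposure_s               # motion blur extent in pixels

for t in (0.002, 0.01, 0.05):
    print(f"t = {t*1000:5.1f} ms  SNR = {single_capture_snr_db(t):5.1f} dB"
          f"  blur = {blur_px(t):5.1f} px")

Longer exposures improve SNR but grow the blur; by tracking one of the two dominant motions during each exposure and merging the pair, the method aims to reach the SNR of a long exposure without paying the corresponding blur.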

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

Thanks to digital technologies, high-definition television (HDTV) has become a common video system. Commercial 4K TV broadcasting started in Japan in 2015, 8K is under development, and an 8K TV broadcasting service is planned for 2020. 4K/8K are promising media, and their high-resolution video opens up many possibilities for broadcasting, medicine, publishing, and other fields. Among video system equipment, cameras are especially important because they determine image quality, particularly resolution. However, precise focus matters even more than the capability of the camera: professional HD/4K/8K cameras are not equipped with autofocus systems because production requirements demand that the focus area be controlled manually, and the image is blurred if the focus is not precise. For operability, small, lightweight cameras are preferred, and their viewfinders are similar in size to those of HDTV cameras, so it is extremely difficult to adjust focus on a small viewfinder even for a professional camera operator. Large, bulky LCDs are therefore used to adjust focus, even for outside broadcasting, which reduces performance and hampers mobility. Focus-assist systems have been proposed to cope with these focus issues, but they are based on linear signal processing and are not tolerant of noise, which always appears when lighting is insufficient. In this paper, a nonlinear signal processing method is proposed to address the noise issue. The nonlinear focus-assist method also indicates focus more accurately and in smaller areas than current systems.
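
For illustration, a minimal sketch contrasting a linear focus measure (plain gradient energy, which accumulates noise everywhere) with a simple nonlinear variant that cores out responses near the noise floor before accumulating them. The coring threshold is an assumption, and the abstract does not specify the paper's actual nonlinear method.

import numpy as np

def linear_focus_value(gray):
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy).sum()                # responds to noise everywhere

def nonlinear_focus_value(gray, noise_sigma=2.0, k=3.0):
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    mag[mag < k * noise_sigma] = 0.0             # nonlinear coring of noise-level responses
    return mag.sum()                             # accumulates only genuine edge content

Under low light the linear measure is inflated by noise at every frame, whereas the cored measure still peaks sharply at best focus, which is the behavior a noise-tolerant focus-assist display needs.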

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

Most stereo cameras are equipped with fixed-focus lens modules. To provide sharp images over a wide range of object distances, stereo cameras with adjustable-focus lens modules have been developed. Although such stereo cameras can be controlled by running an existing monocular autofocus algorithm on each camera independently, the lack of coordination between the cameras and the underutilization of the depth information embedded in the image data make this approach less efficient than it could be. In this paper, we propose a novel stereo autofocus approach that exploits the disparity of stereo images to control lens movement. Experimental results show that the proposed approach can bring the lenses to the peak zone of the focus profile within two lens movements in most (92.3%) cases, even when the lenses start far from the in-focus position.
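
A minimal sketch of driving the focus motor from stereo disparity, as the approach above proposes: disparity gives depth through the rectified-stereo relation Z = f*B/d, and the thin-lens equation gives the lens position that focuses at that depth. The baseline, focal length, pixel pitch, and step mapping are illustrative assumptions.

import numpy as np

def disparity_to_depth_mm(disparity_px, baseline_mm=10.0,
                          focal_mm=4.0, pixel_pitch_mm=0.0014):
    # Rectified-stereo relation: Z = f * B / d.
    return focal_mm * baseline_mm / (disparity_px * pixel_pitch_mm)

def in_focus_image_distance_mm(depth_mm, focal_mm=4.0):
    # Thin-lens equation: 1/f = 1/Z + 1/v  ->  v = f*Z / (Z - f).
    return focal_mm * depth_mm / (depth_mm - focal_mm)

def lens_step_target(depth_mm, steps_per_mm=1000.0, focal_mm=4.0):
    # Convert the required image distance into a (hypothetical) focus-motor step.
    v = in_focus_image_distance_mm(depth_mm, focal_mm)
    return round((v - focal_mm) * steps_per_mm)

# Example: a 25 px disparity maps to an object roughly 1.14 m away.
z = disparity_to_depth_mm(25)
print(z, lens_step_target(z))

Because the disparity already pins down the approximate in-focus position, only a small local search of the focus profile is needed afterwards, which is why the lenses reach the peak zone in very few movements.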

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

In recent years, many applications using a pair of RGB and near-infrared (NIR) images have been proposed in the computer vision and image processing communities. Thanks to recent progress in image sensor technology, it is also becoming possible to manufacture an image sensor with a novel spectral filter array that has RGB plus NIR pixels for one-shot acquisition of the RGB and NIR images. In such a filter array, half of the G pixels in the standard Bayer color filter array (CFA) are typically replaced with NIR pixels. However, its performance has not been fully investigated within the pipeline of single-sensor RGB and NIR image acquisition. In this paper, we present an imaging pipeline for single-sensor RGB and NIR image acquisition and investigate its optimal performance, taking into account the filter array pattern, demosaicking, and color correction. We also propose two types of filter array patterns and demosaicking algorithms for improving the quality of the acquired RGB and NIR images. Based on the presented imaging pipeline, the performance of different filter array patterns and demosaicking algorithms is evaluated. Experimental results demonstrate that our proposed filter array patterns and demosaicking algorithms outperform existing ones.
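
A minimal sketch of the commonly cited mosaic in which one of the two Bayer G sites per 2x2 cell is replaced by NIR, together with a naive per-channel bilinear demosaic via normalized convolution. This baseline is an assumption for illustration, not one of the patterns or demosaicking algorithms proposed in the paper.

import numpy as np
from scipy.ndimage import convolve

# 2x2 unit cell (row-major): R G / N B, i.e. one Bayer G site becomes NIR.
PATTERN = np.array([['R', 'G'],
                    ['N', 'B']])

def demosaic_naive(mosaic):
    h, w = mosaic.shape
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])        # bilinear weights for a 2x2 lattice
    out = {}
    for ch in 'RGNB':
        mask = np.zeros((h, w))
        for dy in range(2):
            for dx in range(2):
                if PATTERN[dy, dx] == ch:
                    mask[dy::2, dx::2] = 1.0
        sparse = mosaic * mask
        # Normalized convolution: interpolate the samples, then divide by the weights.
        out[ch] = convolve(sparse, kernel) / np.maximum(convolve(mask, kernel), 1e-6)
    return out    # dict of full-resolution 'R', 'G', 'B', 'N' planes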

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 18

Chromatic flare artifacts, purple flare in particular, are objectionable color artifacts affecting image quality in digital photography systems ranging from DSLRs to mobile imaging cameras. Although they originate from internal reflections and scattering in the camera module, they can be diminished with proper pixel design strategies. This work presents a method to quantify an image sensor's susceptibility to chromatic flare artifacts. It is based on measurements of the spectral response of pixels at varied angles of incidence and subsequent analysis of the color properties of synthetic images processed through a conventional pipeline. Experimental work was carried out with image sensors that differ in their color filter array and grid properties. Results show that metal grids and deep trench isolation, state-of-the-art pixel fabrication approaches developed to suppress crosstalk, are also advantageous for mitigating chromatic flare artifacts.
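
A minimal sketch of turning angle-resolved spectral responses into a color metric for flare susceptibility: integrate each angle's response against an illuminant, apply the on-axis white balance, and report the resulting chromaticity shift. The Gaussian toy responses, the flat illuminant, and the channel-dependent off-axis leakage (chosen here to mimic a purple-type shift) are all assumptions; the paper uses measured responses and a full conventional pipeline.

import numpy as np

wl = np.arange(400, 701, 10, dtype=float)               # wavelength grid, nm

def toy_response(center, width=40.0, leak=0.0):
    # Gaussian pass band plus an assumed broadband leakage term for oblique light.
    return np.exp(-0.5 * ((wl - center) / width) ** 2) + leak

def rgb_under(illuminant, responses, step=10.0):
    return np.array([(illuminant * r).sum() * step for r in responses])

illum = np.ones_like(wl)                                # flat (equal-energy) illuminant

on_axis  = [toy_response(c) for c in (600, 540, 460)]                     # R, G, B
off_axis = [toy_response(c, leak=l)                                       # assumed leakage:
            for c, l in ((600, 0.06), (540, 0.02), (460, 0.08))]          # R and B leak most

wb = 1.0 / rgb_under(illum, on_axis)                    # white balance from on-axis response
rgb_off = wb * rgb_under(illum, off_axis)
shift = rgb_off / rgb_off[1] - 1.0                      # R/G and B/G shift from neutral
print("chromatic shift (R/G, G/G, B/G):", shift)

A positive shift in both R/G and B/G corresponds to the purple cast described above; pixel structures that reduce the angle-dependent leakage shrink this shift.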

Digital Library: EI
Published Online: February  2016

Keywords

Bayesian estimation, FPGA, Image Quality Assessment, Joint Distribution, Laplacian of Gaussian, RAW image, Reduced-Reference, Saturated pixels correction, Whiten Filters