Most digital cameras today employ Bayer Color Filter Arrays in front of the camera sensor. In order to create a true-color image, a demosaicing step is required, which introduces image blur and artifacts. Special sensors like the Foveon X3 circumvent the demosaicing challenge by using vertically stacked photodiodes. However, they are not commonly used due to high production cost and low flexibility. In this work, a multi-color multi-view approach is presented in order to create true-color images. To this end, the red-filtered left view and the blue-filtered right view are registered and projected onto the green-filtered center view. Due to the camera offset and the slightly different viewing angles of the scene, object occlusions may occur in the side channels, requiring the reconstruction of missing information. For that, a novel local linear regression method is proposed, based on disparity and color similarity. Simulation results show that the proposed method outperforms existing reconstruction techniques by 5 dB on average.
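The occlusion-filling step lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation, of a local linear regression that predicts a missing red value from the co-located green channel, weighting neighboring samples by color and disparity similarity; all function names, window sizes, and weighting parameters are assumptions made for illustration.

    import numpy as np

    def fill_occluded_red(green, red, occluded_mask, disparity, win=7,
                          sigma_c=10.0, sigma_d=2.0):
        """Estimate missing red values via a weighted local linear fit red ~ a*green + b.

        Weights combine color similarity (green difference) and disparity similarity,
        loosely following the idea described in the abstract. Illustrative sketch only.
        """
        h, w = green.shape
        out = red.copy()
        r = win // 2
        ys, xs = np.nonzero(occluded_mask)
        for y, x in zip(ys, xs):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            g_win = green[y0:y1, x0:x1]
            r_win = red[y0:y1, x0:x1]
            d_win = disparity[y0:y1, x0:x1]
            valid = ~occluded_mask[y0:y1, x0:x1]
            if valid.sum() < 4:
                continue  # not enough support; leave for a later pass
            w_c = np.exp(-((g_win - green[y, x]) ** 2) / (2 * sigma_c ** 2))
            w_d = np.exp(-((d_win - disparity[y, x]) ** 2) / (2 * sigma_d ** 2))
            wgt = (w_c * w_d * valid).ravel()
            A = np.stack([g_win.ravel(), np.ones(g_win.size)], axis=1)
            b = r_win.ravel()
            # Weighted least squares for the local linear model.
            Aw = A * wgt[:, None]
            coeff, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ b, rcond=None)
            out[y, x] = coeff[0] * green[y, x] + coeff[1]
        return out

The same scheme applies symmetrically to the blue channel of the right view; an iterative second pass could fill pixels skipped for lack of valid support.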
In this study, an advanced range measurement method using a TOF sensor with 4-tap output pixels and a short, small-duty-cycle light pulse is presented. A CMOS TOF range imager with pinned-photodiode high-speed charge modulator pixels using lateral electric field (LEF) control has been implemented in a 0.11-μm CIS process with high near-infrared sensitivity. In order to improve the range resolution while maintaining the measurable range, multiple time windows are used for the range measurements. Compared with the conventional single time window, the use of N time windows theoretically improves the range resolution by a factor of N^1.5 if the background light shot noise is dominant. In the measurement with N=2, the range resolution is improved by a factor of 2.8 compared with the case of N=1 while maintaining the same distance range.
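One way to read the claimed N^1.5 scaling, under the assumption that background-light shot noise dominates, is sketched below in LaTeX; the abstract does not give the derivation, so the decomposition into the two factors is an assumption.

    % Assumed reading: each window converts 1/N of the distance span (factor N),
    % and the background charge collected per window shrinks as 1/N, so its
    % shot noise shrinks as 1/sqrt(N) (factor sqrt(N)).
    \sigma_L \;\propto\; L_{\mathrm{win}} \cdot \frac{\sigma_{\mathrm{shot}}}{S},
    \qquad
    L_{\mathrm{win}} \propto \frac{1}{N},
    \quad
    \sigma_{\mathrm{shot}} \propto \sqrt{N_{\mathrm{bg}}} \propto \frac{1}{\sqrt{N}}
    \;\;\Longrightarrow\;\;
    \sigma_L \propto N^{-1.5}

For N=2 this predicts an improvement of 2^1.5 ≈ 2.8, consistent with the measured factor reported in the abstract.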
The presence of dark current, signal charge that is not due to photons, has been a performance limiter for image sensors. There has been a 5000x decrease over 40 years, and it is often assumed that this trend will continue. However, the decrease has been accompanied by a change in the nature of the generation mechanism, as seen in characterization data as a function of voltage and temperature. The present limiting root cause of dark current needs to be determined to guide further improvement. It is also interesting to speculate on the ultimate limit of dark current in defect-free silicon.
A preliminary chip evaluation targeting the development of an over-50-Mfps burst global-shutter stacked CMOS image sensor is reported in this work. A two-dimensional CMOS image sensor chip with a 34.56 μm(H) × 34.56 μm(V) equivalent pitch, 25(H) × 100(V) pixels with ultra-high-speed charge collection capability, and 80 in-pixel analog memories was designed and fabricated using a 0.18-μm 1-poly-Si 5-metal-layer CMOS image sensor technology. The fabricated chip was confirmed to operate in two modes: an in-pixel correlated double sampling (CDS) mode with 80 frames at up to 50 Mfps and a direct readout mode with 40 frames at up to 71.4 Mfps. Based on the developed architecture, over 50 Mfps with about 400 consecutive frames and 100% fill factor is expected to be achieved using backside-illumination 3D stacking technology with high-density analog memory.
Most snapshot HDR (High Dynamic Range) image sensors have a non-linear, programmable response curve that requires multiple register settings. The settings are complex enough that most algorithms reduce the number of parameters to only two or three and calculate a smooth response curve that approaches a log response. The information available in the final image depends on the compression rate of the response curve and the quantization step of the device. In this early-stage proposal, we make use of scene information and discrete information transfer to calculate the response-curve shape that maximizes the information in the final image. The image may look different to a human observer but contains more useful information for machine vision processing. One important field of use for such sensors with programmable dynamic range is automotive on-board machine vision, and more specifically autonomous vehicles.
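To make the idea of "maximizing the information in the final image" concrete, the sketch below scores a candidate piecewise-linear response curve by the entropy of the quantized output codes under the scene histogram and brute-forces a single knee point. This is a toy stand-in, not the paper's algorithm: the information measure, the single-knee parameterization, and all names and defaults are assumptions.

    import numpy as np

    def entropy_of_output(hist_edges, hist_counts, curve_knees, n_codes=256):
        """Entropy (bits) of the quantized output for a piecewise-linear response curve.

        curve_knees: monotonically increasing (scene_radiance, output_code) points.
        Illustrative only; the paper's actual information measure may differ.
        """
        centers = 0.5 * (hist_edges[:-1] + hist_edges[1:])
        xs = [k[0] for k in curve_knees]
        ys = [k[1] for k in curve_knees]
        codes = np.clip(np.interp(centers, xs, ys), 0, n_codes - 1).astype(int)
        p = np.bincount(codes, weights=hist_counts, minlength=n_codes)
        p = p / p.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def best_single_knee(hist_edges, hist_counts, n_codes=256, full_scale=1e5):
        """Brute-force search for the single knee point that maximizes output entropy."""
        best = None
        for knee_x in np.geomspace(full_scale * 1e-3, full_scale * 0.5, 50):
            for knee_y in range(16, n_codes - 16, 16):
                curve = [(0.0, 0), (knee_x, knee_y), (full_scale, n_codes - 1)]
                h = entropy_of_output(hist_edges, hist_counts, curve, n_codes)
                if best is None or h > best[0]:
                    best = (h, curve)
        return best

In a real device the optimized knee points would then be translated into the sensor's register settings for the next frame.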
Hot pixel defects in digital imaging sensors accumulate as the camera ages, at a rate that is highly dependent on pixel size. Previously, we developed an empirical formula that projects hot pixel defect growth rates in terms of defect density (defects/year/mm²). We found that hot pixel densities grow via a power law, with the inverse of the pixel size raised to a power of ~3, and with roughly the square root of the ISO (gain). This paper experimentally explores these defect rates as pixels approach the 2 to 1 micron size. We developed techniques for observing hot pixels both in regular DSLRs (7.5-4 μm pixels) and in cell phones (1.3 μm). Cell phone imagers have the smallest pixels, but their defects require careful measurement to detect. First, a statistical analysis distinguishes potential defects from the surrounding noise in a 5x5 pixel area around each candidate pixel. Then, linear regression fits the hot pixel equation to a sequence of dark-field exposures from short (0.008 sec) to long (2 sec) and accepts only those fits with high statistical significance. This greatly improved the power-law fit for the 2-1 micron pixels.
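The two-stage screening described above can be sketched as follows. The exact form of the hot pixel equation and the thresholds below are assumptions for illustration; here the dark response is modeled simply as offset plus dark rate times exposure time.

    import numpy as np
    from scipy import stats

    def detect_hot_pixel(dark_stack, exposure_times, y, x, z_thresh=5.0, p_thresh=1e-3):
        """Screen one pixel for hot-pixel behaviour from a stack of dark-field frames.

        dark_stack: array of shape (n_exposures, H, W), one dark frame per exposure time.
        Stage 1 compares the pixel to the noise of its 5x5 neighbourhood; stage 2 fits a
        linear dark response I = offset + rate * T and keeps only significant fits.
        """
        n, h, w = dark_stack.shape
        y0, y1 = max(0, y - 2), min(h, y + 3)
        x0, x1 = max(0, x - 2), min(w, x + 3)

        # Stage 1: is the pixel an outlier relative to its 5x5 neighbourhood
        # in the longest exposure?
        nb = dark_stack[-1, y0:y1, x0:x1].astype(float)
        mu, sigma = nb.mean(), nb.std() + 1e-9
        if (dark_stack[-1, y, x] - mu) / sigma < z_thresh:
            return None

        # Stage 2: linear fit of pixel value versus exposure time.
        values = dark_stack[:, y, x].astype(float)
        fit = stats.linregress(exposure_times, values)
        if fit.pvalue > p_thresh or fit.slope <= 0:
            return None
        return {"offset": fit.intercept, "dark_rate": fit.slope, "p_value": fit.pvalue}

Requiring both a neighbourhood outlier test and a significant exposure-time fit is what separates genuine hot pixels from the elevated read and shot noise of very small pixels.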
An efficient method is presented to extend the dynamic range of 4T pixels whilst keeping the fixed pattern noise (FPN) low. The method utilizes the floating diffusion node as a secondary photocharge integration node, with a partial reset applied to the floating diffusion node. The output image signal is a recombination of the linear signal integrated in the photodiode and a compressed signal integrated on the floating diffusion node. The recombination algorithm suppresses the FPN stemming from the variable transfer-gate potential barrier height. An image sensor array of 1344x1008 pixels was built in a 0.11-μm technology. We demonstrate the preservation of the low-light sensitivity of 4T pixels, a dark noise of 3.5 e-, and a dynamic range of 115 dB for the current sensor, with the potential to greatly exceed 120 dB. Furthermore, the FPN stemming from the variation of the transfer-gate potential barrier height does not degrade the image; the pixel's compressive response can be controlled on a per-frame basis depending on the scene, and the HDR scene is captured within a single frame. The sensor combines properties that are crucial for automotive and other machine vision applications.
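For orientation, the sketch below shows one plausible way to merge a linear photodiode signal with a compressed floating-diffusion signal. It is a minimal stand-in, not the paper's FPN-suppressing recombination algorithm; the saturation level, knee, and gain ratio are illustrative calibration parameters.

    import numpy as np

    def recombine_hdr(pd_signal, fd_signal, pd_sat, fd_gain_ratio, knee):
        """Merge the linear photodiode signal with the compressed floating-diffusion signal.

        Below the photodiode saturation level the low-noise linear signal is used
        directly; above it, the decompressed floating-diffusion signal takes over.
        """
        pd = np.asarray(pd_signal, dtype=float)
        fd = np.asarray(fd_signal, dtype=float)

        # Decompress the FD branch: below the knee it is treated as linear,
        # above the knee the compressive slope (fd_gain_ratio) is inverted.
        fd_lin = np.where(fd <= knee, fd, knee + (fd - knee) / fd_gain_ratio)

        # Use the linear branch while it is not saturated, otherwise fall
        # back to the extended-range branch.
        return np.where(pd < pd_sat, pd, fd_lin)

The key difference claimed in the abstract is that the actual recombination is designed so that pixel-to-pixel variation of the transfer-gate barrier height, which sets where compression begins, does not appear as FPN in the merged image.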
Among the various techniques that allow the acquisition of scene depth, the Depth from Focus (DfF) technique is a good candidate for low-resource real-time embedded systems: it relies on low-complexity processing and requires only a single camera. On the other hand, the large data dependency imposed by the size of the focus cube must be tackled in order to ensure the embeddability of the algorithm. This paper presents an algorithmic improvement and an architecture optimized for both processing complexity and memory footprint. For full-HD images, this architecture can produce depth and confidence maps in real time using roughly 1.4p arithmetic operations per pixel, where p is the number of depth planes, without the need for a multiplier, while the required memory footprint is equivalent to 6% of one frame. All-in-focus images can also be produced on the fly at the price of an additional two-frame memory buffer.
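As background, a reference Depth-from-Focus computation can be sketched as below. This is the generic technique, not the paper's multiplier-free streaming architecture; the focus measure and the confidence definition are assumptions chosen for simplicity.

    import numpy as np

    def depth_from_focus(focus_stack):
        """Reference Depth-from-Focus sketch over a focus cube of shape (p, H, W).

        Returns a per-pixel depth index (best-focused plane) and a simple
        confidence map based on how much the focus peak stands out.
        """
        p, h, w = focus_stack.shape

        # Focus measure: sum of absolute horizontal and vertical second differences
        # (a modified-Laplacian style criterion built from additions only).
        fm = np.zeros_like(focus_stack, dtype=float)
        for k in range(p):
            img = focus_stack[k].astype(float)
            d2x = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
            d2y = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
            fm[k] = d2x + d2y

        depth = fm.argmax(axis=0)                 # best-focused plane per pixel
        peak = fm.max(axis=0)
        mean = fm.mean(axis=0) + 1e-9
        confidence = peak / mean                  # peak-to-average focus ratio
        return depth, confidence

The embedded architecture in the paper avoids holding the whole focus cube: planes are processed as they stream in, which is what brings the memory footprint down to a small fraction of a frame.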
The best-known impact of charge lag in image sensors is the phenomenon of "ghost images", which refers to the residual images created by trapped charge as the scene transitions from bright to dark. These "ghost" artifacts are particularly problematic for video imaging and, therefore, traditional lag characterization has focused on quantifying inter-frame charge smear. In still imaging, however, the impact of lag on image quality has not been thoroughly characterized. This work studies the effect of lag caused by the photodiode potential barrier on image quality in still imaging with CMOS image sensors. It shows a direct correlation between lag and fixed-pattern noise, demonstrates color artifacts caused by lag, and explains the mechanism that leads to the appearance of these artifacts.