Topics

ADC and other image sensor blocks, photodiodes, pixels, and processes; computational photography; CMOS image sensors (CIS); digital photography; event sensing; HDR; imaging algorithms; image sensors; image stitching; imaging systems; immersive capture systems; machine learning applications in imaging; medical imaging; mobile imaging; multi-camera systems; programmable cameras; RGBW color filter arrays (CFA); smart image sensors

Keywords

3D integration; neuromorphic imaging; multi-camera; all-pixel AF; exposure bracketing; event modeling; color filter array (CFA); column-parallel ADC; color correction; camera calibration; offset calibration; low power; image restoration; TCG; image stacking; quantization; silicon-on-insulator (SOI); digital pixel sensor; demosaicing; ESP32-CAM; geometric distortion; single-slope ADC; latency; open source; noise evaluation; bonding; panorama; A/D converters; quantum efficiency; camera pose; FPN compensation; low noise; scanning back; event simulation; multiple conversion gain
© Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

Solid-state optical sensors and solid-state cameras have established themselves as the imaging systems of choice for many demanding professional applications in automotive, space, medical, scientific, and industrial settings. Their advantages of low power, low noise, high resolution, high geometric fidelity, broad spectral sensitivity, and extremely high quantum efficiency have led to a number of revolutionary uses. The conference focuses on the image sensing topics listed above, bringing together researchers, scientists, and engineers working in these fields and offering the opportunity for quick publication of their work.

Digital Library: EI
Published Online: January  2022
Pages 155-1 - 155-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

A column-parallel 10-bit SAR ADC for high-speed image sensors has been implemented. A fast offset calibration technique using memory is proposed to compensate for offset mismatch, and the ADC is designed to fit within the narrow space of a single column pitch. The memory accumulates the offset variation so that the offset is tracked within two cycles. After applying the offset calibration technique, the measured per-column offset variation of the ADC improves from 4.27 LSB to 0.39 LSB, and the fixed-pattern noise (FPN) improves from 4.14 LSB to 0.34 LSB. The calibration method covers an offset range of ±32 LSB. The implemented ADC achieves a maximum speed of 500 kS/s, and the maximum frame rate of the sensor is 3000 fps. The power consumption of the sensor, excluding the LVDS interface, is 71 mW. The sensor is designed in a TowerJazz CIS 180 nm process with one polysilicon and four metal layers. The supply voltages of the analog and digital domains are 3.3 V and 1.8 V, respectively.
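
To illustrate per-column offset compensation at a software level, the following is a minimal sketch, not the authors' on-chip implementation (which stores and accumulates offsets in memory inside each column ADC): a per-column offset estimated from reference conversions is subtracted from later codes, which suppresses column FPN. The frame sizes and noise levels are illustrative assumptions.

```python
import numpy as np

def estimate_column_offsets(reference_frames):
    """Estimate per-column offsets (in LSB) from reference conversions,
    e.g. several frames digitized with a constant, known input level."""
    frames = np.asarray(reference_frames, dtype=np.float64)
    column_mean = frames.mean(axis=(0, 1))            # mean code per column
    return column_mean - column_mean.mean()           # deviation from global mean

def apply_offset_calibration(raw_frame, column_offsets):
    """Subtract the stored per-column offsets from raw ADC codes."""
    return np.asarray(raw_frame, dtype=np.float64) - column_offsets[None, :]

# Synthetic 10-bit example with ~4 LSB of column-to-column offset mismatch.
rng = np.random.default_rng(0)
true_offsets = rng.normal(0.0, 4.0, size=640)
reference = 512 + true_offsets + rng.normal(0.0, 1.0, size=(8, 480, 640))
frame = 512 + true_offsets + rng.normal(0.0, 1.0, size=(480, 640))
corrected = apply_offset_calibration(frame, estimate_column_offsets(reference))
print("column FPN before: %.2f LSB" % frame.mean(axis=0).std())
print("column FPN after:  %.2f LSB" % corrected.mean(axis=0).std())
```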

Digital Library: EI
Published Online: January  2022
Pages 183-1 - 183-5,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

Commercial vision sensors have recently entered the mobile market, which required computer vision networks to be quantized. However, quantization has not been well studied for the challenging image restoration tasks of the image signal processor (ISP), even though it is crucially important for hardware implementation and for deployment on hardware accelerators such as neural processing units (NPUs). In this paper, we study the effect of quantizing a deep learning network on image quality, applying various quantization schemes to raw RGBW image demosaicing. Experimental results show that 12-bit weight quantization sustains image quality at a level similar to the floating-point network, while a 10-bit quantized network shows slight degradation in objective image quality and mild visual artifacts. Although weight bit-depth can be significantly reduced for computer vision tasks, our findings show that this is not true for raw image restoration tasks: at least 10-bit weights are required to provide sufficient image quality. However, some memory can be saved in the feature-map bit-depth. We conclude that network bit-depth is critical for raw image restoration.
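
To make the bit-depth comparison concrete, here is a minimal sketch of post-training uniform weight quantization in NumPy; it is not the quantization scheme used in the paper, and the layer shape and weight scale are arbitrary assumptions.

```python
import numpy as np

def quantize_weights(w, n_bits):
    """Uniform symmetric per-tensor quantization of a weight array to n_bits,
    returning the de-quantized (simulated fixed-point) weights."""
    w = np.asarray(w, dtype=np.float64)
    qmax = 2 ** (n_bits - 1) - 1                    # e.g. 2047 for 12-bit signed
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                                # values the quantized network uses

# Quantization error shrinks by roughly 4x for every two extra bits of weight depth.
w = np.random.randn(3, 3, 32, 32) * 0.05            # assumed conv-layer weight shape
for bits in (12, 10, 8):
    err = np.abs(quantize_weights(w, bits) - w).max()
    print(f"{bits}-bit weights: max abs error = {err:.2e}")
```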

Digital Library: EI
Published Online: January  2022
Pages 199-1 - 199-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

A composite image is an image created by combining portions of multiple separately captured images. Stitching captures of tiled portions of a larger scene can produce a single composite image (a panorama) with a wider view angle and higher total resolution. Image stacking is a different type of compositing, in which the scene does not change significantly across captures but the camera parameters may be systematically varied. Focus stacking can extend the depth of field, aperture stacking can implement apodization to shape the out-of-focus point spread function, and noise and motion reduction can be accomplished even when the same camera parameters are used for every capture in the stack. These and other compositing methods are well known and widely used, but a fixed pattern is typically followed for the ordering of captures and the choice of capture parameters. This paper examines the problem of static, pseudo-static, or dynamic determination of the optimal capture parameters and ordering.
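
The abstract is about choosing and ordering capture parameters rather than the merge itself, but a concrete merge helps fix ideas. Below is a minimal focus-stacking sketch using OpenCV that picks, per pixel, the capture with the strongest local Laplacian response; the file names in the usage comment are placeholders, and this is not the authors' method.

```python
import cv2
import numpy as np

def focus_stack(images):
    """Merge a focus bracket by picking, per pixel, the capture with the
    highest local sharpness (absolute Laplacian response)."""
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    sharpness = [np.abs(cv2.Laplacian(g.astype(np.float64), cv2.CV_64F)) for g in grays]
    # Smooth the sharpness maps slightly to avoid speckled selection.
    sharpness = [cv2.GaussianBlur(s, (9, 9), 0) for s in sharpness]
    index = np.argmax(np.stack(sharpness), axis=0)     # winning capture per pixel
    stack = np.stack(images)                           # (N, H, W, 3)
    rows, cols = np.indices(index.shape)
    return stack[index, rows, cols]

# Usage (placeholder file names): captures of one scene at different focus settings.
# images = [cv2.imread(f"focus_{i:02d}.png") for i in range(5)]
# cv2.imwrite("stacked.png", focus_stack(images))
```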

Digital Library: EI
Published Online: January  2022
Pages 200-1 - 200-5,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

Panchromatic color filter arrays containing a white channel, such as the Kodak RGBW (CFA2.0) array, were introduced some time ago with the expectation of better low-light resolution thanks to the panchromatic signal. However, until now there has been no successful RGBW image sensor in the industry targeting mobile cameras. In this work, we introduce a novel Samsung RGBW image sensor and study its performance in the popular remosaic scenario. We propose DePhaseNet, a deep fully convolutional network for the RGBW remosaicing and demosaicing problem, with three layers of phase-differentiated inputs and a custom frequency-based loss function for each layer. The proposed method successfully suppresses the false colors that are inherent to RGBW sensors because of their heavily under-sampled color channels. Using this method, we not only increase detail preservation but also improve color reproduction by 2% over the conventional method. We find that the RGBW sensor is beneficial not only in low-light scenarios but also in the widely used remosaic scenario. Experiments show an improvement in image quality, yielding a CPSNR of 42 dB on the Kodak dataset and reaching the level of Bayer CFA demosaicing. The proposed method advances the state of the art in RGBW demosaicing by 6 dB in CPSNR.
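
CPSNR, the metric quoted above, is PSNR computed jointly over the three color channels of the reconstructed image. A minimal sketch follows; the loader and demosaicing call in the usage comment are hypothetical placeholders, not DePhaseNet.

```python
import numpy as np

def cpsnr(reference, test, max_value=255.0):
    """Color PSNR: PSNR computed over all three channels jointly (in dB)."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)       # averaged over H x W x 3
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

# Example: evaluate a demosaiced result against the ground-truth full-color image.
# gt = load_ground_truth("kodim19.png"); out = demosaic(raw_rgbw)  # hypothetical helpers
# print(f"CPSNR: {cpsnr(gt, out):.2f} dB")
```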

Digital Library: EI
Published Online: January  2022
Pages 201-1 - 201-3,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

Digital color cameras detect scenes through a color filter array (CFA) of mosaic patterns. Among the filter arrays found in commercial CMOS image sensors (CIS), the CMY color filter set is a more desirable choice for low-illumination conditions. However, these color filters generally suffer from lower color fidelity than their RGB counterparts. Since overall CIS performance is affected not only by the pigments used for the color filters but also by the detailed pixel structure design and the CFA arrangement, we explore the best combination of color filter materials and pixel designs to improve CIS performance under low-light conditions, with applications in astrophotography, low-phototoxicity bioimaging, and general photography at dawn and dusk.
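
Part of the color-fidelity penalty of complementary (CMY) filters comes from the color correction needed to map the sensor responses back to RGB, which also scales the sensor noise. The sketch below, under the idealized dye model stated in the comments, applies a 3x3 correction matrix and reports its per-channel noise gain; the matrix is an illustrative assumption, not measured data from this work.

```python
import numpy as np

# Illustrative inversion assuming idealized, normalized complementary responses
# C = (G+B)/2, M = (R+B)/2, Y = (R+G)/2; a real matrix is fitted to a color chart.
CCM = np.array([
    [-1.0,  1.0,  1.0],   # R = -C + M + Y
    [ 1.0, -1.0,  1.0],   # G =  C - M + Y
    [ 1.0,  1.0, -1.0],   # B =  C + M - Y
])

def apply_ccm(cmy_image, ccm=CCM):
    """Apply a 3x3 color correction matrix to an (H, W, 3) sensor image."""
    return np.einsum("ij,hwj->hwi", ccm, np.asarray(cmy_image, dtype=np.float64))

def noise_gain(ccm=CCM):
    """Factor by which uncorrelated, equal-variance sensor noise is scaled in
    each output channel: the root-sum-square of the matrix row coefficients."""
    return np.sqrt((ccm ** 2).sum(axis=1))

print(noise_gain())   # sqrt(3) per channel for this idealized matrix
```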

Digital Library: EI
Published Online: January  2022
Pages 231-1 - 231-7,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

Incorporating images in which the calibration chart is only partially visible into geometric calibration can make the calibration process considerably more efficient and more accurate. This is particularly true for systems involving a large number of cameras. We describe a calibration tool we developed that can also utilize images in which only part of the chart is detected, and we demonstrate its benefits compared to traditional checkerboard-chart calibration. We show examples illustrating how the requirement that all chart points be detected in an image affects the spatial distribution of the corner points used for calibration, and we analyze the impact of the resulting distribution on calibration accuracy using both synthetic and real data sets.
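
The core idea, using views in which only some chart corners were detected, can be sketched with OpenCV's calibrateCamera, which accepts a different subset of chart points per view. The detector that produces the (corner id, image point) pairs is assumed here and is not the authors' tool; the chart dimensions are placeholders.

```python
import cv2
import numpy as np

def calibrate_with_partial_views(detections, image_size):
    """Calibrate from views in which only a subset of chart corners was found.

    `detections` is a list with one entry per image: (ids, points), where `ids`
    indexes the detected corners into the full chart model. The detector that
    produces these partial matches is assumed, not shown."""
    cols, rows, square = 9, 6, 0.025                       # placeholder chart model (m)
    chart = np.zeros((rows * cols, 3), np.float32)
    chart[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

    object_points, image_points = [], []
    for ids, pts in detections:
        if len(ids) < 6:                                   # too few corners to use
            continue
        object_points.append(chart[np.asarray(ids)])       # only the detected subset
        image_points.append(np.asarray(pts, np.float32).reshape(-1, 1, 2))

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, K, dist
```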

Digital Library: EI
Published Online: January  2022
Pages 232-1 - 232-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

Experimenting with custom programming of cameras can be difficult. Most consumer cameras are protected to prevent users from reprogramming them. Industrial cameras can be flexibly controlled by an external computer, but are generally not stand-alone programmable devices. However, various inexpensive camera modules, designed largely for building IoT (Internet of Things) devices, combine extensive programmability with a camera in a compact, low-power module. One of the smallest and least expensive, the ESP32-CAM module, combines a 2 MP OmniVision OV2640 camera with a dual-core 32-bit processor, 802.11 WiFi and Bluetooth as well as wired I/O interfaces, a microSD slot, low-power modes, etc., all supported by the Arduino programming environment and a rich collection of open source libraries. Why not use it for programmable camera research? This paper describes how the ESP32-CAM had to be adapted for use in a variety of experimental cameras. For example, some of these cameras do not use the lens screwed and glued onto the OV2640, and replacing this lens revealed a number of issues ranging from spectral response to adjustment of lens corrections. There are numerous strange interactions between different functions that end up sharing the same I/O pins, so workarounds were needed. It also was necessary to devise ways to handle various higher-level issues such as implementation of a live view and synchronization across cameras. However, the key problems have been resolved with the open source software and hardware designs described here.
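
As an example of coordinating several modules from a host computer, here is a minimal Python sketch that pulls still images from multiple ESP32-CAM boards in parallel. It assumes each board runs web-server firmware exposing a /capture still-image endpoint (as in the stock CameraWebServer example) at a known IP address; this gives only coarse software-level synchronization and is not the synchronization scheme developed in the paper.

```python
import concurrent.futures
import datetime
import urllib.request

# IP addresses of the ESP32-CAM modules on the local network (assumptions).
CAMERAS = ["192.168.1.41", "192.168.1.42", "192.168.1.43"]

def grab_still(host, timeout=5.0):
    """Fetch one JPEG from a module serving a /capture endpoint."""
    with urllib.request.urlopen(f"http://{host}/capture", timeout=timeout) as r:
        return host, r.read()

def grab_all(cameras=CAMERAS):
    """Trigger all cameras as nearly simultaneously as a host script allows
    by issuing the requests from parallel threads."""
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    with concurrent.futures.ThreadPoolExecutor(len(cameras)) as pool:
        for host, jpeg in pool.map(grab_still, cameras):
            with open(f"{stamp}_{host.replace('.', '-')}.jpg", "wb") as f:
                f.write(jpeg)

if __name__ == "__main__":
    grab_all()
```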

Digital Library: EI
Published Online: January  2022
Pages 242-1 - 242-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

Event sensing is a novel modality that is sensitive only to changes of information. This redundancy reduction can be exploited to achieve high temporal resolution, reduce power consumption, simplify algorithms, and so on. The hardware-software co-design of event sensors and algorithms requires early simulation of the sensor system. It has been shown that high-speed video is well suited for deriving such event data for temporal-contrast-based event sensors, but the simulators published so far neglect phenomena such as readout latency and the refractory period. This paper presents ongoing modeling activities at OmniVision Technologies.
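
For orientation, here is a minimal idealized temporal-contrast event generator driven by high-speed video frames, in the spirit of the published simulators mentioned above; it deliberately ignores readout latency and the refractory period, i.e. exactly the effects the paper sets out to model, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def events_from_frames(frames, timestamps, threshold=0.2):
    """Generate temporal-contrast events from a high-speed video sequence.

    A pixel emits an event whenever its log intensity has changed by more than
    `threshold` since the last event at that pixel (ideal model: no readout
    latency, no refractory period)."""
    eps = 1e-3
    ref = np.log(frames[0].astype(np.float64) + eps)   # per-pixel reference level
    events = []                                        # (t, x, y, polarity)
    for frame, t in zip(frames[1:], timestamps[1:]):
        diff = np.log(frame.astype(np.float64) + eps) - ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
        # Advance fired references by an integer number of threshold steps.
        ref[fired] += np.sign(diff[fired]) * threshold * np.floor(
            np.abs(diff[fired]) / threshold)
    return events
```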

Digital Library: EI
Published Online: January  2022
Pages 256-1 - 256-4,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 7
Abstract

A low-power, low-noise digital pixel sensor (DPS) is presented in this paper. To design and analyze the random noise (RN) of the developed DPS, we utilize a novel simulation method called transient-based AC noise simulation (TBAS), which effectively helps to estimate the noise components of the low-power single-slope (SS) analog-to-digital converter (ADC). Based on this noise analysis, the high-performance DPS has been successfully designed and demonstrated.
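
To make the single-slope conversion concrete, here is a short behavioral sketch in which a counter value is latched when a linear ramp crosses the input voltage, with an optional Gaussian noise term at the comparator input; the noise level and reference values are illustrative assumptions, and this is not the TBAS method itself.

```python
import numpy as np

def single_slope_adc(v_in, v_ref=1.0, n_bits=10, comparator_noise=0.0, rng=None):
    """Behavioral model of a single-slope ADC: a counter runs while a linear
    ramp rises, and the count is latched when the ramp crosses the input."""
    rng = rng or np.random.default_rng()
    steps = 2 ** n_bits
    ramp = np.linspace(0.0, v_ref, steps, endpoint=False)   # one LSB per count
    noisy_in = v_in + (rng.normal(0.0, comparator_noise) if comparator_noise else 0.0)
    crossed = np.nonzero(ramp >= noisy_in)[0]
    return int(crossed[0]) if crossed.size else steps - 1   # saturate at full scale

# Repeated conversions of the same input show the random-noise spread in codes.
codes = [single_slope_adc(0.4321, comparator_noise=0.3e-3) for _ in range(1000)]
print(np.mean(codes), np.std(codes))
```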

Digital Library: EI
Published Online: January  2022
