Keywords: APS defects rates; ADC and other Image Sensor Blocks; active pixel sensor (APS); auto white balance; Colour Filter; camera calibration; Computer Vision; Calibration; Colour Conversion; camera; Correlated Multiple Sampling; Colour; Colour Matching; Correlated Double Sampling; CMOS image sensor; Condition-based Maintenance; Dynamic Range; Defect Detection; device integration; DSP; depth reconstruction; Embedded Systems; fish-eye camera; FPGA; HDR; hot pixel development; image enhancement; Imaging Systems; image processing; imager defect detection; Image Sensors; ISP; Image sensor; image degradation; laser; low noise; light field; Low light enhancement; Multispectral Imaging; multispectral sensor; noise; Neural Network; optical system; Photodiodes; pixel; Processes; Pixels; Real Time Application; stabilization; solid state scanner; Spectral reconstruction; single photon avalanche diode; sensors; SoC; SPAD Array; scene change detection; Smart Image Sensors; TDA3x; time-of-flight; tone mapping; video; vignetting; Wiener Inverse; 1/f noise; 3D reconstruction; 3D imaging; 8K UHDTV; Filters
Pages A09-1 - A09-8,  © Society for Imaging Science and Technology 2019
Digital Library: EI
Published Online: January  2019
Pages 357-1 - 357-5,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

A range image of a scene is produced with a solid-state time-of-flight system that uses active illumination and a time-gated single photon avalanche diode (SPAD) array. The distance from the imager to a target is measured by delaying the time gate in small steps and counting the photons of the pixels in each delay step in successive measurements. To achieve a high frame rate, the number of delay steps needed is minimized by limiting the depth scan to the range of interest. To be able to measure scenes with objects at different ranges, the array has been divided into groups of pixels with independently controlled time gating. This paper demonstrates an algorithm that controls the time gating of the pixel groups in the sensor array so that depth maps of the scene are obtained with the time-gated SPAD array in real time at a 70 Hz frame rate.
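
The gated-scanning idea lends itself to a compact sketch. The Python fragment below is illustrative only: array shapes, step sizes and function names are assumptions, not the authors' implementation. It recovers a depth map by taking, per pixel, the delay step with the maximum photon count, and restricts the next scan to a window around the previously measured depth so fewer steps are needed per frame.

```python
import numpy as np

C = 2.998e8  # speed of light (m/s)

def depth_from_gated_counts(counts, delay_step_s, gate_start_s=0.0):
    """Estimate per-pixel depth from a stack of time-gated photon counts.

    counts: array of shape (num_delay_steps, H, W); counts[k] holds the photons
    accumulated by each pixel with the gate delayed by k * delay_step_s.
    Returns a depth map in metres, taking the delay step with the maximum
    count as the round-trip time of the laser return.
    """
    peak_step = np.argmax(counts, axis=0)              # (H, W) index of strongest return
    round_trip = gate_start_s + peak_step * delay_step_s
    return C * round_trip / 2.0                        # one-way distance

def restrict_scan(prev_depth, delay_step_s, margin_steps=4):
    """Pick a reduced delay-step window around the previously measured depth,
    so the next frame only scans the range of interest (fewer steps means a
    higher frame rate)."""
    centre = int(np.round(2.0 * np.median(prev_depth) / C / delay_step_s))
    return max(centre - margin_steps, 0), centre + margin_steps
```

In the sensor described above, each pixel group could maintain its own window, allowing objects at different ranges to be tracked simultaneously.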

Digital Library: EI
Published Online: January  2019
Pages 359-1 - 359-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

“Hot pixel” defects in digital imaging sensors accumulate as the camera ages, at a rate that is highly dependent on pixel size. Previously we developed an empirical formula that projects hot pixel defect growth rates in terms of defect density (defects/year/mm²) via a power law, with the inverse of the pixel size raised to the power of ~3, multiplied by the square root of the ISO (gain). We show in this paper that this increasing defect rate results in a higher probability that two defects will occur within a 5×5 pixel box. The demosaicing and JPEG image compression algorithms may greatly amplify the impact of two defective pixels within a 5×5 pixel box, spreading it into a 16×16 pixel box and thus resulting in very noticeable image degradation. We develop both an analytical model (a generalized birthday-problem formula) and Monte Carlo simulations to estimate the number of hot pixels required to reach a given probability of two defective pixels occurring within a 5×5 square. For a 20 Mpix DSLR camera (360 mm²), only 128 hot pixels generate a 4% probability of two such defective pixels, which for pixels of size 4 μm may occur in 1.4 years at ISO 6400 and in 3.2 years at ISO 3200.
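
The Monte Carlo estimate mentioned in the abstract can be sketched in a few lines. The snippet below is a hedged illustration: the sensor geometry, the trial count, and the criterion that two pixels "share a 5×5 box" when both coordinate differences are at most 4 are assumptions rather than the authors' exact model.

```python
import numpy as np

def prob_two_defects_in_box(n_defects, width, height, box=5, trials=20000, seed=None):
    """Monte Carlo estimate of the probability that at least one pair of
    randomly placed hot pixels fits inside a single box-by-box window."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = rng.integers(0, width, n_defects)
        y = rng.integers(0, height, n_defects)
        dx = np.abs(x[:, None] - x[None, :])
        dy = np.abs(y[:, None] - y[None, :])
        close = (dx <= box - 1) & (dy <= box - 1)   # both fit in one 5x5 window
        np.fill_diagonal(close, False)
        hits += close.any()
    return hits / trials

# Example (assumed geometry): a ~20 Mpix sensor modelled as a 5470 x 3650 grid
# with 128 hot pixels placed uniformly at random.
print(prob_two_defects_in_box(128, 5470, 3650, seed=0))
```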

Digital Library: EI
Published Online: January  2019
Pages 360-1 - 360-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

In this paper, we describe a novel method for image-based rail defect detection for railroad maintenance. While the framework was developed to handle a broad range of defect types, in this paper we illustrate the approach on the specific example of detecting cracks on the fishplates connecting rails. Our algorithm pipeline consists of three major components: a preprocessing and localization module, a classification module, and an on-line retraining module. The pipeline first performs preprocessing tasks such as intensity normalization or snow-pixel modification to better prepare the images, and then localizes candidate regions of interest (ROIs) where the defects of interest may reside. The resulting candidate ROIs are then analyzed by trained classifier(s) to determine whether a defect is present. The classifiers are trained off-line using labeled training samples; as the system is used in the real world, more samples can be gathered, giving us the opportunity to refine and improve the initial models. Experimental results show the effectiveness of our algorithm pipeline for detecting fishplate cracks as well as several other defects of interest.
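
A minimal skeleton of the three-module pipeline might look like the following; every class and method here is a placeholder for illustration, not the authors' code.

```python
from dataclasses import dataclass, field

def crop(image, roi):
    """Cut an (x, y, w, h) region out of a 2-D image array."""
    x, y, w, h = roi
    return image[y:y + h, x:x + w]

@dataclass
class DefectPipeline:
    """Skeleton of the pipeline: preprocessing/localization, off-line trained
    classification, and on-line retraining from samples gathered in the field."""
    classifier: object                       # trained off-line on labeled samples
    new_samples: list = field(default_factory=list)

    def preprocess(self, image):
        # e.g. intensity normalization, snow-pixel handling
        return image

    def localize_rois(self, image):
        # return candidate (x, y, w, h) regions, e.g. around fishplates
        return []

    def detect(self, image):
        image = self.preprocess(image)
        return [roi for roi in self.localize_rois(image)
                if self.classifier.predict(crop(image, roi))]

    def collect_sample(self, patch, label):
        # field data kept for the on-line retraining module
        self.new_samples.append((patch, label))
```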

Digital Library: EI
Published Online: January  2019
Pages 361-1 - 361-4,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

Images captured in low light suffer from underexposure and noise. These poor-quality images hinder computer vision algorithms as well as human vision. While this problem can be mitigated by increasing the exposure time, doing so introduces new problems: in applications like ADAS, where there are fast-moving objects in the scene, a longer exposure causes motion blur, and in applications that demand higher frame rates, increasing the exposure time is not an option. Increasing the gain results in noise as well as saturation of pixels at the higher end. A real-time, scene-adaptive algorithm is therefore required for the enhancement of low-light images. We propose a real-time low-light enhancement algorithm for low-cost embedded platforms that preserves more detail than existing global enhancement algorithms. The algorithm is integrated into the image signal processing pipeline of TI's TDA3x and achieves ~50 fps on the C66x DSP for HD-resolution video captured from OmniVision's OV10640 Bayer image sensor.
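
For context, a purely global enhancement of the kind the paper improves upon can be sketched as below. This is an assumed gain-plus-gamma baseline, not the proposed TDA3x algorithm, and the target mean and limits are arbitrary.

```python
import numpy as np

def global_low_light_enhance(img, gamma=0.5, target_mean=0.25, max_gain=8.0):
    """A minimal global enhancement: scale exposure towards a target mean,
    then apply a gamma curve. Input is a float image in [0, 1]."""
    img = np.clip(img.astype(np.float32), 0.0, 1.0)
    gain = np.clip(target_mean / (img.mean() + 1e-6), 1.0, max_gain)
    return np.clip((img * gain) ** gamma, 0.0, 1.0)
```

Because the same curve is applied everywhere, local detail in bright regions is easily flattened, which is the limitation a detail-preserving, scene-adaptive method addresses.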

Digital Library: EI
Published Online: January  2019
Pages 362-1 - 362-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

Nonlinear CMOS image sensor (CIS) technology is capable of high/wide dynamic range imaging at high frame rates without motion artifacts. However, unlike with linear CIS technology, there is no generic method for colour correction of nonlinear CIS technology. Instead, there are specific methods for specific nonlinear responses, e.g., the logarithmic response, that are based on legacy models. Inspired by recent work on generic methods for fixed pattern noise and photometric correction of nonlinear sensors, which depend only on a reasonable assumption of monotonicity, this paper proposes and validates a generic method for colour correction of nonlinear sensors. The method is composed of a nonlinear colour correction, which employs cubic Hermite splines, followed by a linear colour correction. Calibration with a colour chart is required to estimate the relevant parameters. The proposed method is validated, through simulation, using a combination of experimental data, from a monochromatic logarithmic CIS, and spectral data, reported in the literature, of actual colour filter arrays and target colour patches.
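
The two-step structure (a monotone cubic Hermite spline per channel followed by a linear 3×3 correction) can be sketched as follows. The use of SciPy's PchipInterpolator, the least-squares matrix fit, and the assumption of distinct chart responses are one possible realisation, not the paper's calibration procedure.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator  # monotone cubic Hermite splines

def fit_colour_correction(raw_chart, ref_linear_rgb):
    """Fit the two-step correction: per-channel cubic Hermite spline (nonlinear
    step) followed by a 3x3 matrix (linear step).
    raw_chart: (N, 3) sensor responses to the colour-chart patches.
    ref_linear_rgb: (N, 3) known linear RGB values of the same patches."""
    splines = []
    linearized = np.empty_like(raw_chart, dtype=float)
    for c in range(3):
        order = np.argsort(raw_chart[:, c])          # PCHIP needs increasing x
        spline = PchipInterpolator(raw_chart[order, c], ref_linear_rgb[order, c])
        splines.append(spline)
        linearized[:, c] = spline(raw_chart[:, c])
    M, *_ = np.linalg.lstsq(linearized, ref_linear_rgb, rcond=None)  # 3x3 CCM
    return splines, M

def apply_colour_correction(raw, splines, M):
    lin = np.stack([splines[c](raw[..., c]) for c in range(3)], axis=-1)
    return lin @ M
```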

Digital Library: EI
Published Online: January  2019
Pages 363-1 - 363-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

A system-on-chip (SoC) platform having a dual-core microprocessor (μP) and a field-programmable gate array (FPGA), as well as interfaces for sensors and networking, is a promising architecture for edge computing applications in computer vision. In this paper, we consider a case study involving the low-cost Zynq-7000 SoC, which is used to implement a three-stage image signal processor (ISP) for a nonlinear CMOS image sensor (CIS) and to interface the imaging system to a network. Although the high-definition imaging system operates efficiently in hard real time, by exploiting an FPGA implementation, it sends information over the network on demand only, by exploiting a Linux-based μP implementation. In the case study, the Zynq-7000 SoC is configured in a novel way. In particular, to guarantee hard real-time performance, the FPGA is always the master, communicating with the μP through interrupt service routines and direct memory access channels. Results include a validation of the overall system, using a simulated CIS, and an analysis of the system complexity. On this low-cost SoC, resources remain available for significant additional complexity, so that a computer vision application can in future be integrated with the nonlinear CMOS imaging system.
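
On the Linux/μP side, the "on demand only" behaviour could look roughly like the sketch below; the port, framing, and frame-access function are placeholders, not the case study's implementation. The FPGA keeps streaming into memory, and a frame leaves the device only when a client asks for it.

```python
import socket
import struct

def serve_latest_frame(get_latest_frame, host="0.0.0.0", port=5000):
    """Send the most recent frame only when a client connects.
    get_latest_frame() stands in for reading the newest DMA buffer."""
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                frame = get_latest_frame()                     # bytes of the latest frame
                conn.sendall(struct.pack("!I", len(frame)) + frame)
```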

Digital Library: EI
Published Online: January  2019
Pages 367-1 - 367-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

This study investigated the noise suppression effect of multiple sampling applied to a 3-stage pipeline analog-to-digital converter (ADC) in a 33-megapixel, 120-fps, 1.25-in CMOS image sensor. The 3-stage pipeline ADC is composed of folding-integration (FI), cyclic, and successive approximation register ADCs, and multiple sampling for noise suppression is implemented in the FI ADC. The sampling number M is limited by the conversion interval of the FI ADC, and the maximum sampling number at 120-fps operation is M = 6. To investigate the noise suppression effect at 120-fps operation, we measured the random noise of the pixel readout circuit as a function of the sampling number M and compared it with theoretical calculations. The measurements correspond reasonably well with the calculations, confirming that the sampling number M = 6 is effective for noise suppression. Furthermore, the calculations revealed that the 1/f noise of the source follower dominates the noise performance.
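
As a first-order illustration (not the paper's exact calculation), averaging M samples reduces the white (thermal) noise power by 1/M, while the source-follower 1/f noise barely averages down and therefore sets the floor that dominates around M = 6:

```latex
\sigma_{\mathrm{read}}(M) \;\approx\; \sqrt{\frac{\sigma_{\mathrm{white}}^{2}}{M} \;+\; \sigma_{1/f}^{2}}
```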

Digital Library: EI
Published Online: January  2019
Pages 368-1 - 368-6,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

Correlated Multiple Sampling (CMS), an extension of Correlated Double Sampling (CDS), is a very popular noise reduction technique used in the readout chain of image sensors. Analyses in the literature show that, with an increasing number M of samples, the total noise tends to a limit value dominated by the pixel 1/f noise. Nevertheless, this approach fails to explain why, in some cases, the measured total noise may reach a minimum before, against all odds, finally growing with M. This paper shows that an explanation can be found if the pixel noise Power Spectral Density (PSD) varies as 1/f^E with a frequency exponent E > 1 instead of E = 1.
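
The effect can be reproduced numerically. The sketch below is an illustration under assumed sampling details (adjacent blocks, unit sample spacing), not the paper's analysis: it generates noise with a 1/f^E power spectral density and measures the variance of the difference of two adjacent M-sample means. With E = 1 the variance levels off as M grows, whereas with E > 1 it can pass through a minimum and then rise.

```python
import numpy as np

def noise_trace(n, exponent, rng):
    """Discrete-time noise whose PSD falls as 1/f**exponent (FFT shaping;
    the DC bin is zeroed)."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-exponent / 2.0)
    spectrum = amp * (rng.standard_normal(freqs.size) +
                      1j * rng.standard_normal(freqs.size))
    return np.fft.irfft(spectrum, n)

def cms_variance(exponent, M, trials=2000, trace_len=4096, seed=0):
    """Variance of the CMS output, modelled as the difference between the
    means of two adjacent blocks of M samples from the same trace."""
    rng = np.random.default_rng(seed)
    out = np.empty(trials)
    for k in range(trials):
        x = noise_trace(trace_len, exponent, rng)
        out[k] = x[M:2 * M].mean() - x[:M].mean()
    return out.var()

for M in (1, 2, 4, 8, 16, 32, 64):
    print(M, f"{cms_variance(1.0, M):.2e}", f"{cms_variance(1.5, M):.2e}")
```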

Digital Library: EI
Published Online: January  2019
Pages 371-1 - 371-9,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 9

A color image is the result of a very complex physical process involving both the light reflected by the surface of the object and the sensor, where the sensor is the human eye or an image acquisition system. To avoid metamerism phenomena and to give better rendering to color images produced by synthesis, it is sometimes necessary to work in the spectral domain. Two classes of methods currently enable the closest spectral image to be produced from color images: the first uses circular and exponential functions, and the second uses the Penrose inverse or the Wiener inverse. In this article, we first describe these two methods, which are used in fields ranging from image synthesis and colorimetry to satellite imagery. We then propose a new method based on a neural network to improve on the first two approximation methods. This new method can also be used to calibrate most color digitization systems and sub-wavelength color array filters.
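
The Wiener- and pseudo-inverse reconstructions mentioned above can be sketched directly from training data. The variable names, shapes and regularisation constant below are assumptions for illustration, not the article's notation.

```python
import numpy as np

def wiener_reconstruction_matrix(train_spectra, train_rgb, noise_var=1e-4):
    """Wiener-style estimator built from training correlations,
    W = C_sr (C_rr + sigma^2 I)^-1, mapping camera responses back to spectra.
    train_spectra: (N, L) reflectance samples; train_rgb: (N, 3) responses."""
    Csr = train_spectra.T @ train_rgb / len(train_rgb)        # (L, 3) cross-correlation
    Crr = train_rgb.T @ train_rgb / len(train_rgb)            # (3, 3) response correlation
    return Csr @ np.linalg.inv(Crr + noise_var * np.eye(3))   # (L, 3)

def pinv_reconstruction_matrix(train_spectra, train_rgb):
    """Penrose (pseudo-)inverse alternative: least-squares linear map from
    RGB responses to spectra."""
    return train_spectra.T @ np.linalg.pinv(train_rgb.T)      # (L, 3)

def reconstruct(rgb, W):
    """Estimate an (L,) spectrum from a single (3,) camera response."""
    return W @ rgb
```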

Digital Library: EI
Published Online: January  2019
