The IEEE P2020 standard addresses fundamental image quality attributes that are specifically relevant to cameras in automotive imaging systems. The noise portion of IEEE P2020 is largely based on existing standards for noise in digital cameras, but it adjusts test conditions and procedures to make them more suitable for automotive cameras, accounting for fisheye lenses, 16-32 bit data formats in high dynamic range (HDR) operation, HDR scenes, extended temperature ranges, and near-infrared imaging. This work presents the methodology, procedures, and experimental results demonstrating extraction of camera characteristics from videos of HDR and other test charts recorded in raw format, including dark and photo signals, temporal noise, fixed-pattern noise, signal-to-noise ratio curves, the photon transfer curve, the conversion factor, and effective full well capacity. The work also presents methodology and experimental results for characterizing camera noise in the dark array and signal falloff.
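The following is a minimal illustrative sketch, not the procedure from the paper, of how a conversion factor and effective full well capacity might be extracted from a photon transfer curve. It assumes pairs of raw flat-field frames captured at increasing exposure levels; the function name, input format, and saturation cut-off are hypothetical.

```python
import numpy as np

def photon_transfer(frames_per_level):
    """Estimate conversion factor and effective full well from flat-field frame pairs.

    frames_per_level: list of (frame_a, frame_b) pairs of raw flat-field frames,
    one pair per exposure/illumination level (hypothetical input format).
    """
    means, variances = [], []
    for a, b in frames_per_level:
        a = a.astype(np.float64)
        b = b.astype(np.float64)
        means.append(0.5 * (a.mean() + b.mean()))
        # Differencing two frames removes fixed-pattern noise;
        # the factor 1/2 recovers single-frame temporal variance.
        variances.append(0.5 * np.var(a - b))
    means = np.asarray(means)
    variances = np.asarray(variances)

    # Fit the linear (shot-noise-limited) part of the photon transfer curve:
    # variance = read_var + gain_dn_per_e * mean  (dark offset assumed subtracted).
    lin = means < 0.7 * means.max()          # crude cut below saturation roll-off
    gain_dn_per_e, read_var = np.polyfit(means[lin], variances[lin], 1)

    conversion_factor = 1.0 / gain_dn_per_e  # electrons per DN
    # Effective full well: signal level where temporal variance peaks, in electrons.
    full_well_e = means[np.argmax(variances)] * conversion_factor
    return conversion_factor, full_well_e, read_var
```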
Non-RGB image sensors have recently been gaining traction in automotive applications that require high-sensitivity camera systems. Several color filter combinations have been proposed, such as RCCB, RCCG, and RYYCy; however, some of them have difficulty differentiating yellow and red traffic signals. This paper proposes a solution to that issue: shifting the red color filter edge. The differentiation performance was verified by segmentation in color space using a traffic signal spectrum database we built, and the result was also checked against image data from a hyperspectral camera simulation. For the SNR comparison between these color filter options, we propose an SNR10-based scheme that enables an apples-to-apples comparison, and we discuss the overall pros and cons.
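As a rough illustration of an SNR10-style comparison (the illumination level at which a pixel's SNR first reaches 10), the sketch below uses a simple shot-noise plus read-noise model. The function, its parameters, and the noise model are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def snr10(illuminance_lux, signal_electrons, read_noise_e, dark_signal_e=0.0):
    """Find the illuminance at which SNR first reaches 10 (the SNR10 point).

    illuminance_lux  : 1-D array of test illuminance levels, ascending
    signal_electrons : mean collected electrons per pixel at each level
    read_noise_e     : temporal read noise in electrons (assumed value)
    dark_signal_e    : accumulated dark signal in electrons (assumed value)
    """
    noise = np.sqrt(signal_electrons + dark_signal_e + read_noise_e**2)
    snr = signal_electrons / noise
    if snr.max() < 10:
        return None  # sensor never reaches SNR = 10 over the tested range
    # Interpolate the SNR = 10 crossing; SNR is monotone in illuminance here.
    return float(np.interp(10.0, snr, illuminance_lux))
```

Two filter variants can then be compared by which one reaches SNR10 at the lower illuminance for a given scene and exposure.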
Dynamic vision sensors are growing in popularity for computer vision on moving scenes: their output is a stream of events reflecting temporal lighting changes rather than absolute intensity values. One of their advantages is fast detection of events, which are read out asynchronously as spikes. However, high event data throughput implies an increasing workload for the read-out, which can lead to data loss or to prohibitively large power consumption on constrained devices. This work presents a scheme to reduce data throughput through near-pixel pre-processing: fewer events are generated, encoding both temporal change and intensity slope magnitude. In our simulated example, the most aggressive version of our approach reduces data throughput to 14 % of the original.
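A toy model of the idea, not the authors' read-out architecture, is sketched below: a baseline event count (one event per contrast-threshold crossing) is compared against a scheme that emits at most one event per pixel per frame carrying a quantized slope-magnitude code. The threshold, bit width, and input format are assumptions for illustration.

```python
import numpy as np

def event_counts(log_intensity, threshold=0.15, slope_bits=2):
    """Compare baseline DVS event count with a slope-coded scheme (toy model).

    log_intensity : array of shape (frames, H, W) with per-pixel log intensity
    threshold     : contrast threshold for a standard DVS event (assumed)
    slope_bits    : bits encoding intensity-slope magnitude per event (assumed)
    """
    dlog = np.abs(np.diff(log_intensity, axis=0))
    crossings = np.floor(dlog / threshold)

    # Baseline: every threshold crossing produces one event.
    baseline_events = crossings.sum()

    # Slope-coded scheme: one event per pixel per frame when any crossing
    # occurred, with the magnitude clipped to what slope_bits can represent.
    coded = np.minimum(crossings, 2**slope_bits - 1)
    reduced_events = np.count_nonzero(coded)

    ratio = reduced_events / max(baseline_events, 1)
    return baseline_events, reduced_events, ratio
```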