© Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

Advancements in sensing, computing, image processing, and computer vision technologies are enabling unprecedented growth and interest in autonomous vehicles and intelligent machines, from self-driving cars to unmanned drones, to personal service robots. These new capabilities have the potential to fundamentally change the way people live, work, commute, and connect with each other, and will undoubtedly give rise to entirely new applications and commercial opportunities for generations to come. The main focus of AVM is perception. This begins with sensing. While imaging continues to be an essential emphasis in all EI conferences, AVM also embraces other sensing modalities important to autonomous navigation, including radar, LiDAR, and time of flight. Realization of autonomous systems also includes purpose-built processors, e.g., ISPs, vision processors, DNN accelerators, as well as core image processing and computer vision algorithms, system design and architecture, simulation, and image/video quality. AVM topics are at the intersection of these multi-disciplinary areas. AVM is the Perception Conference that bridges the imaging and vision communities, connecting the dots for the entire software and hardware stack for perception, helping people design globally optimized algorithms, processors, and systems for intelligent “eyes” for vehicles and machines.

Digital Library: EI
Published Online: January  2022
Pages 101-1 - 101-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

Since it is essential for Computer Vision systems to perform reliably in safety-critical applications such as autonomous vehicles, there is a need to evaluate their robustness to naturally occurring image perturbations. More specifically, the performance of Computer Vision systems needs to be linked to image quality, which has not received much research attention so far. In fact, the aberrations of a camera system are always spatially variable over the Field of View, which may influence the performance of Computer Vision systems depending on the degree of local aberrations. Therefore, the goal is to evaluate the performance of Computer Vision systems under the effects of defocus while taking the spatial domain into account. Large-scale Autonomous Driving datasets are degraded by a parameterized optical model to simulate driving scenes under physically realistic effects of defocus. Using standard evaluation metrics, the Spatial Recall Index (SRI) and the new Spatial Precision Index (SPI), the performance of Computer Vision systems on these degraded datasets is compared with the optical performance of the applied optical model. A correlation could be observed between the spatially varying optical performance and the spatial performance of Instance Segmentation systems.
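To make the idea of spatially resolved detection metrics concrete, the sketch below computes a per-cell recall map over an image grid from binary ground-truth and prediction masks. It is a minimal illustration only; the grid size and the exact handling of instances are assumptions here, not the published SRI/SPI formulas.

```python
import numpy as np

def spatial_recall_map(gt_mask: np.ndarray, pred_mask: np.ndarray, grid=(8, 8)) -> np.ndarray:
    """Per-cell recall: fraction of ground-truth pixels in each grid cell
    that are also covered by the prediction mask."""
    h, w = gt_mask.shape
    gh, gw = grid
    recall = np.full(grid, np.nan)
    for i in range(gh):
        for j in range(gw):
            ys = slice(i * h // gh, (i + 1) * h // gh)
            xs = slice(j * w // gw, (j + 1) * w // gw)
            gt = gt_mask[ys, xs].astype(bool)
            if gt.any():
                recall[i, j] = (gt & pred_mask[ys, xs].astype(bool)).sum() / gt.sum()
    return recall  # NaN where a cell contains no ground-truth pixels
```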

Digital Library: EI
Published Online: January  2022
Pages 108-1 - 108-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

In this paper, we review the LED flicker metrics as defined by the IEEE P2020 working group. The goal of these metrics is to quantify the flicker behaviour of a camera system, to enable engineers to quantify flicker mitigation, and to identify and explore challenging flicker use cases and system limitations. In brief, the Flicker Modulation Index quantifies the modulation of a flickering light source, and is particularly useful for quantifying banding effects in rolling shutter cameras. The Flicker Detection Index quantifies the ability of a camera system to distinguish a flickering light source from the background signal level. The Modulation Mitigation Probability quantifies the ability of a camera system to mitigate the modulation of a flickering light source. This paper explores various flicker use cases, shows how the IEEE P2020 metrics can be used to quantify camera system performance in these use cases, and discusses measurement and reporting considerations for lab-based flicker assessment.
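As a rough illustration of how a modulation-style flicker metric can be derived from a recorded sequence, the sketch below computes a simple modulation index over a light-source region of interest. The (max - min)/(max + min) form and the ROI handling are assumptions for illustration; the exact IEEE P2020 definitions differ in detail.

```python
import numpy as np

def modulation_index(frames: np.ndarray, roi) -> float:
    """Modulation of a (flickering) light-source ROI over a frame sequence,
    computed as (max - min) / (max + min) of the per-frame mean signal.
    `frames` has shape (N, H, W); `roi` is a (row_slice, col_slice) pair."""
    per_frame = frames[(slice(None),) + tuple(roi)].reshape(len(frames), -1).mean(axis=1)
    s_max, s_min = per_frame.max(), per_frame.min()
    return (s_max - s_min) / (s_max + s_min)
```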

Digital Library: EI
Published Online: January  2022
Pages 109-1 - 109-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

The IEEE P2020 standard addresses fundamental image quality attributes that are specifically relevant to cameras in automotive imaging systems. The Noise standard in IEEE P2020 is mostly based on existing standards on noise in digital cameras. However, it adjusts test conditions and procedures to make them more suitable for cameras in automotive applications, such as the use of fisheye lenses, 16-32 bit data formats when operating in high dynamic range (HDR) mode, HDR scenes, an extended temperature range, and near-infrared imaging. The work presents methodology, procedures, and experimental results that demonstrate extraction of camera characteristics from videos of HDR and other test charts recorded in raw format, including dark and photo signals, temporal noise, fixed-pattern noise, signal-to-noise ratio curves, the photon transfer curve, the conversion factor, and the effective full well capacity. The work also presents methodology and experimental results for characterization of camera noise in the dark array and of signal falloff.
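The photon-transfer extraction mentioned above follows a well-established pattern: temporal variance is plotted against mean signal for flat fields at increasing exposure, and the slope gives the gain. The sketch below shows this standard EMVA-1288-style analysis, assuming pairs of raw flat-field frames plus one dark pair; it is an illustration, not the IEEE P2020 procedure itself.

```python
import numpy as np

def photon_transfer(frame_pairs, dark_pair):
    """Estimate the photon transfer curve from pairs of raw flat-field frames
    taken at increasing exposure. Temporal variance is computed from the
    difference of each pair, which cancels fixed-pattern noise."""
    dark_mean = 0.5 * (dark_pair[0].mean() + dark_pair[1].mean())
    means, variances = [], []
    for f1, f2 in frame_pairs:
        means.append(0.5 * (f1.mean() + f2.mean()) - dark_mean)
        variances.append(np.var(f1.astype(np.float64) - f2.astype(np.float64)) / 2.0)
    means, variances = np.array(means), np.array(variances)
    gain_dn_per_e = np.polyfit(means, variances, 1)[0]  # slope of variance vs. mean (DN/e-)
    conversion_factor = 1.0 / gain_dn_per_e             # e-/DN
    return means, variances, conversion_factor
```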

Digital Library: EI
Published Online: January  2022
Pages 110-1 - 110-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

Simulation plays a key role in the development of Advanced Driver Assist Systems (ADAS) and Autonomous Driving (AD) stacks. A growing number of simulation solutions address the development, testing, and validation of these systems at unprecedented scale and with a large variety of features. Transparency with respect to the fitness of features for a given task is often hard to come by, and sorting marketing claims from product performance facts is a challenge. New players – on the users’ and vendors’ sides – will lead to further diversification. Evolving standards, regulatory requirements, verification and validation practices, etc., will add to the list of criteria that might be relevant for identifying the best-fit solution for a given task. There is a need to evaluate and measure a solution’s compliance with these criteria on the basis of objective test scenarios in order to quantitatively compare different simulation solutions. The goal shall be a standardized catalog of tests which simulation solutions have to undergo before they can be considered fit (or certified) for a certain use case. Here, we propose a novel evaluation framework and a detailed testing procedure as a first step towards quantifying simulation quality. We illustrate the use of this method with results from an initial implementation, thereby highlighting the top-level properties Determinism, Real-time Capability, and Standards Compliance. We hope to raise awareness that simulation quality is not a nice-to-have feature but rather a central aspect for the whole spectrum of stakeholders, and that it needs to be quantified for the development of safe autonomous driving.
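As one example of what an objective simulation test could look like, the sketch below checks the Determinism property by repeating an identical scenario with a fixed seed and comparing the logged outputs. The run_simulation interface and the tolerance handling are hypothetical; they only illustrate the kind of test a standardized catalog might contain.

```python
import numpy as np

def determinism_test(run_simulation, scenario, seed=42, n_runs=3, tol=0.0):
    """Repeat the same scenario with identical configuration and seed and
    check that the logged outputs are identical (or within tolerance).
    `run_simulation(scenario, seed)` is a hypothetical interface returning
    an array of logged states, e.g. ego poses per time step."""
    reference = np.asarray(run_simulation(scenario, seed))
    for _ in range(n_runs - 1):
        repeat = np.asarray(run_simulation(scenario, seed))
        if repeat.shape != reference.shape or np.max(np.abs(repeat - reference)) > tol:
            return False
    return True
```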

Digital Library: EI
Published Online: January  2022
Pages 116-1 - 116-5,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

In-Cabin Monitoring Systems (ICMS) are functional safety systems designed to monitor the driver and/or passengers inside an automobile. The Driver Monitoring System (DMS) and the Occupant Monitoring System (OMS) are two variants of ICMS. A DMS focuses solely on the driver to monitor events like fatigue, inattention, and vital health signs. An OMS monitors all the occupants, including unattended children. Besides safety, ICMS can also augment security and comfort through driver identification and personalization. In-cabin monitoring brings a unique set of challenges from an imaging perspective in terms of higher analytics needs, smaller form factors, low-light vision, and color accuracy. This paper discusses these challenges and provides an efficient implementation on Texas Instruments’ TDA2Px automotive processor. The paper also provides details on a novel implementation of the RGB+IR sensor format processing that is commonly used in these systems, enabling a premium ICMS system using the TDA2Px system-on-chip (SoC).
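For context, RGB+IR processing typically starts by splitting the raw mosaic into color and IR planes and compensating IR leakage in the color channels. The sketch below assumes a simple 2x2 RGB-IR layout and an illustrative leakage coefficient; real sensors use various patterns (e.g. 4x4), and the TDA2Px implementation described in the paper is not shown here.

```python
import numpy as np

def split_rgbir_2x2(raw: np.ndarray):
    """Split a raw 2x2 RGB-IR mosaic (assumed layout: R at (0,0), G at (0,1),
    IR at (1,0), B at (1,1)) into sub-sampled color planes and an IR plane,
    then subtract a fraction of the IR signal as leakage compensation."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g  = raw[0::2, 1::2].astype(np.float32)
    ir = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    leak = 0.2  # illustrative IR contamination coefficient per channel
    r, g, b = (np.clip(c - leak * ir, 0, None) for c in (r, g, b))
    return np.dstack([r, g, b]), ir
```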

Digital Library: EI
Published Online: January  2022
Pages 117-1 - 117-5,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

The combination of simultaneous localization and mapping (SLAM) and frontier exploration enables a robot to traverse and map an unknown area autonomously. Most prior autonomous SLAM solutions utilize information only from depth-sensing devices. However, in situations where the main goal is to collect data from auxiliary sensors such as a thermal camera, existing approaches require two passes: one pass to create a map of the environment and another to collect the auxiliary data, which is time-consuming and energy-inefficient. We propose a sensor-aware frontier exploration algorithm that enables the robot to perform map construction and auxiliary data collection in one pass. Specifically, our method uses a real-time ray tracing technique to construct a map that encodes unvisited locations from the perspective of the auxiliary sensors rather than the depth sensors; this encourages the robot to fully explore those areas to complete the data collection and map making in one pass. Our proposed exploration framework is deployed on a LoCoBot with the task of collecting thermal images of building envelopes. We validate it with experiments in both multi-room commercial buildings and cluttered residential buildings. Using a metric that evaluates the coverage of sensor data, our method significantly outperforms a baseline using a naive SLAM algorithm.
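A simplified, 2-D flavor of the sensor-aware idea is sketched below: cells visible to the auxiliary sensor (rather than the depth sensor) are marked as covered by casting rays within the sensor's field of view, and frontiers would then be chosen among uncovered free cells. The grid representation, ray stepping, and parameter values are illustrative assumptions, not the paper's real-time ray tracing implementation.

```python
import numpy as np

def update_sensor_coverage(coverage, occupancy, pose, fov=np.deg2rad(60),
                           max_range=40, n_rays=64):
    """Mark grid cells seen by an auxiliary sensor (e.g. a thermal camera)
    from the current pose. `coverage` (bool) and `occupancy` (float) are 2-D
    grids; `pose` is (row, col, heading). Rays stop at occupied cells."""
    r0, c0, theta = pose
    for ang in np.linspace(theta - fov / 2, theta + fov / 2, n_rays):
        for step in range(1, max_range):
            r = int(round(r0 + step * np.sin(ang)))
            c = int(round(c0 + step * np.cos(ang)))
            if not (0 <= r < coverage.shape[0] and 0 <= c < coverage.shape[1]):
                break
            coverage[r, c] = True
            if occupancy[r, c] > 0.5:  # ray blocked by an obstacle
                break
    return coverage
```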

Digital Library: EI
Published Online: January  2022
Pages 118-1 - 118-5,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

Automated driving functions, like highway driving and parking assist, are increasingly being deployed in high-end cars, with the goal of realizing self-driving cars using deep learning (DL) techniques like convolutional neural networks (CNNs) and Transformers. Camera-based perception, driver monitoring, driving policy, and radar and lidar perception are a few of the examples built using DL algorithms in such systems. Traditionally, custom software provided by silicon vendors is used to deploy these DL algorithms on devices. This custom software is highly optimized for a limited set of supported features, but it is not flexible enough for quickly evaluating various deep learning model architectures. In this paper we propose using open-source deep learning inference frameworks to quickly deploy any model architecture without any performance/latency impact. We have implemented this proposed solution with three open-source inference frameworks (TensorFlow Lite, TVM/Neo-AI-DLR, and ONNX Runtime) on Linux running on Arm.
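As a flavor of how little glue code such frameworks require, the snippet below runs a single inference with ONNX Runtime, one of the three frameworks named above. The model path and input shape are placeholders for whatever network is being evaluated.

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# Load an arbitrary model exported to ONNX and run one inference on dummy data.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```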

Digital Library: EI
Published Online: January  2022
Pages 126-1 - 126-5,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

Depth sensing technology has become important in a number of consumer, robotics, and automated driving applications. However, the depth maps generated by such technologies today still suffer from limited resolution, sparse measurements, and noise, and require significant post-processing. Depth map data often has higher dynamic range than common 8-bit image data and may be represented as 16-bit values. Deep convolutional neural nets can be used to perform denoising, interpolation and completion of depth maps; however, in practical applications there is a need to enable efficient low-power inference with 8-bit precision. In this paper, we explore methods to process high-dynamic-range depth data using neural net inference engines with 8-bit precision. We propose a simple technique that attempts to retain signal-to-noise ratio in the post-processed data as much as possible and can be applied in combination with most convolutional network models. Our initial results using depth data from a consumer camera device show promise, achieving inference results with 8-bit precision that have similar quality to floating-point processing.
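The abstract does not spell out the proposed technique, but one common way to fit high-dynamic-range depth into 8 bits while keeping quantization steps roughly proportional to signal is non-linear companding. The square-root characteristic below is an illustrative assumption, not necessarily the authors' method.

```python
import numpy as np

def compand_depth_to_8bit(depth16: np.ndarray, max_depth=65535.0) -> np.ndarray:
    """Compress 16-bit depth to 8 bits with a square-root characteristic so
    that quantization steps grow with depth."""
    d = np.clip(depth16.astype(np.float32) / max_depth, 0.0, 1.0)
    return np.round(np.sqrt(d) * 255.0).astype(np.uint8)

def expand_depth_from_8bit(depth8: np.ndarray, max_depth=65535.0) -> np.ndarray:
    """Approximate inverse of the companding above."""
    d = depth8.astype(np.float32) / 255.0
    return (d * d) * max_depth
```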

Digital Library: EI
Published Online: January  2022
Pages 147-1 - 147-6,  © Society for Imaging Science and Technology 2022
Volume 34
Issue 16
Abstract

Self-supervised learning has been an active area of research in the past few years. Contrastive learning is a type of self-supervised learning that has achieved significant performance improvements on image classification tasks. However, no work has been done on its application to fisheye images for autonomous driving. In this paper, we propose FisheyePixPro, an adaptation of the pixel-level contrastive learning method PixPro (Xie et al., 2021) for fisheye images. This is the first attempt to pretrain a contrastive-learning-based model directly on fisheye images in a self-supervised manner. We evaluate the performance of the learned representations on the WoodScape dataset using a segmentation task. Our FisheyePixPro model achieves a 65.78 mIoU score, a significant improvement over the PixPro model. This indicates that pre-training a model directly on fisheye images yields better performance on a downstream task.
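For reference, the mIoU figure quoted above is the mean intersection-over-union across classes on the segmentation task; a minimal version of that computation, assuming integer label maps and ignoring classes absent from both prediction and ground truth, is sketched below.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over classes for a segmentation result.
    `pred` and `target` are integer label maps of the same shape."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union > 0:
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious)) if ious else float("nan")
```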

Digital Library: EI
Published Online: January  2022
