Keywords
Autonomous Vehicle Systems; autonomous vehicle imaging systems; Automated Machines; Automotive Imaging; Autonomous driving; Automated Driving; ADAS; analysis; Automotive simulator; ADAS (ADVANCED DRIVER ASSISTANCE SYSTEM); ADAS/AD Domain
contrast detection probability; Convolution; Careabouts; Convolution Neural Network; CAMERA; Convolutional Neural Networks (CNN); Computer vision; Closed heightmap estimation; Camera Validation
Driving behaviors; Deep Learning; DSP; Data association; deep neural networks; Deep Reinforcement Learning
Extended Kalman Filter; Eye tracking; Environment monitoring
Fixation pattern
Generative adversarial network
HDR; Human-machine Interfaces; Hyperspectral vision
Image Understanding; Image Quality; Image Processing; imaging; IMAGE ARTIFACT DETECTION; IEEE P2020; Image processing; Image and Vision Processors; image analysis
Johnson criteria
Kalman Filter
Localization Mapping & Navigation; LED flicker; loop closure detection
Machine Vision; Multi-camera visualization; Modulation Transfer Function; MAXIMALLY STABLE EXTREMAL REGION; MTF; Machine learning; Multi-sensor Data Fusion; Multi-pedestrian tracking
Online tracking; Optimization
Point Spread Function (PSF); Perspective correct surround view; Particle filtering; Perception and Analytics
Random Ferns; Robust Localization; RELIABILITY; RADAR; RAINDROP DETECTION; real-time pedestrian detection
Surround view automotive systems; saliency map; system; SENSOR FUSION; SURFACE TENSION; Simultaneous localization and mapping; Scene Understanding; Sharpness; System Topology; Subjective and objective visual quality assessment; Sensing Technology; Sensor Fusion; SiL/HiL; System Partitioning
thermal image; Tracking-by-detection; Trained Parking; Tradeoff Parameters
Usage Scenarios; Unscented Kalman Filter
Vehicle E/E Architecture; Virtual Simulation Platforms; Vehicle Environment; Visual SLAM; Virtual view
You only look once (YOLO)
Pages 567-1 - 567-6,  © Society for Imaging Science and Technology 2018
Digital Library: EI
Published Online: January  2018
Pages 105-1 - 105-10,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

This paper explores the use of existing methods from the image science literature to perform 'first-pass' specification and modeling of imaging systems intended for use in autonomous vehicles. The use of the Johnson Criteria [1], and suggestions for its adaptation to modern systems comprising neural nets or other machine vision techniques, is discussed to enable initial selection of field of view, pixel size and sensor format. More sophisticated Modulation Transfer Function (MTF) modeling is detailed to estimate the frequency response of the system, including lower bounds due to phase effects between the sampling grid and the scene [2]. A signal model is then presented accounting for illumination spectra, geometry and light level, scene reflectance, lens geometry and transmission, and sensor quantum efficiency to yield electrons per lux-second per pixel in the plane of the sensor. A basic noise model is outlined and an information-theory-based approach to camera ranking is presented. Thoughts on extending the above to examine color differences between objects are mentioned. The results from the models are used in examples to demonstrate preliminary ranking of differently specified systems in various imaging conditions.
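As a rough, first-pass illustration of such a signal model, the sketch below (Python) estimates the electrons collected per pixel from scene illuminance, reflectance, lens f-number and transmission, pixel pitch, exposure time and quantum efficiency. It assumes monochromatic light at 555 nm in place of the full spectral integration described above, and all numerical values are illustrative rather than taken from the paper.

    import math

    # Back-of-envelope signal model: electrons collected per pixel for a given
    # scene illuminance.  Assumes monochromatic light at 555 nm instead of a
    # full spectral integration.
    H = 6.626e-34          # Planck constant [J s]
    C = 2.998e8            # speed of light [m/s]
    LAMBDA = 555e-9        # reference wavelength [m]
    LM_PER_W = 683.0       # luminous efficacy at 555 nm [lm/W]

    def electrons_per_pixel(scene_lux, reflectance, f_number, lens_t,
                            qe, pixel_pitch_m, exposure_s):
        # Illuminance at the sensor plane for a distant Lambertian scene.
        sensor_lux = scene_lux * reflectance * lens_t / (4.0 * f_number ** 2)
        # Convert photometric to radiometric flux, then to photon flux.
        irradiance_w_m2 = sensor_lux / LM_PER_W
        photon_flux = irradiance_w_m2 / (H * C / LAMBDA)   # photons / s / m^2
        pixel_area = pixel_pitch_m ** 2
        return photon_flux * pixel_area * exposure_s * qe

    # Example: 100 lux scene, 20 % reflectance, f/2.0 lens with 90 % transmission,
    # 60 % QE, 3 µm pixel pitch, 10 ms exposure.
    print(electrons_per_pixel(100, 0.2, 2.0, 0.9, 0.6, 3e-6, 0.01))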

Digital Library: EI
Published Online: January  2018
Pages 146-1 - 146-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

In recent years, the use of LED lighting has become widespread in the automotive environment, largely because of its high energy efficiency, reliability, and low maintenance costs. There has also been a concurrent increase in the use and complexity of automotive camera systems. To a large extent, LED lighting and automotive camera technology evolved separately and independently. As the use of both technologies has increased, it has become clear that LED lighting poses significant challenges for automotive imaging, namely the so-called "LED flicker". LED flicker is an artifact observed in digital imaging where an imaged light source appears to flicker, even though the light source appears constant to a human observer. This paper defines the root cause and manifestations of LED flicker. It defines the use cases where LED flicker occurs and the related consequences. It further defines a test methodology and metrics for evaluating an imaging system's susceptibility to LED flicker.
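The root cause can be illustrated with a toy simulation: a PWM-driven LED sampled by a camera whose exposure is much shorter than the PWM period appears lit in some frames and dark in others. The sketch below (Python) assumes a 100 Hz PWM drive, 25 % duty cycle, 100 µs exposure and 30 fps capture; these values are illustrative and do not reflect the paper's test methodology or metrics.

    import numpy as np

    # Toy simulation of LED flicker: a PWM-driven LED sampled by a camera whose
    # exposure time is much shorter than the PWM period.
    def led_on_fraction(t_start, exposure, pwm_period, duty):
        """Fraction of the exposure window during which the LED is on."""
        samples = np.linspace(t_start, t_start + exposure, 1000)
        return np.mean((samples % pwm_period) < duty * pwm_period)

    frame_rate = 30.0           # frames per second
    exposure = 100e-6           # 100 microsecond exposure (bright daylight)
    pwm_period = 1.0 / 100.0    # 100 Hz LED PWM drive
    duty = 0.25                 # 25 % duty cycle

    brightness = [led_on_fraction(n / frame_rate, exposure, pwm_period, duty)
                  for n in range(60)]
    # The imaged LED fluctuates from frame to frame (flicker), even though a
    # human observer would perceive a constant light source.
    print(["on" if b > 0.5 else "off" for b in brightness])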

Digital Library: EI
Published Online: January  2018
Pages 147-1 - 147-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Surround view camera systems are nowadays offered by most car manufacturers. Currently, a considerable number of different multi-camera visualization systems exist in the automotive sector, which are difficult to evaluate and compare in terms of visual performance. This is mainly due to the lack of standardized evaluation approaches, unpredictable 3D input content, unpredictable outdoor conditions, non-standardized display units, and visual quality requirements that are not clearly identified by the car manufacturers. Recently, the IEEE P2020 initiative was established to address image quality standards for automotive systems. In this paper, we address the problem of reliably evaluating multi-camera automotive surround view systems in terms of visual quality. We propose a test methodology and an efficient test system platform with a video playback system and real camera input images captured from the vehicle, which enables visual quality monitoring subjectively on the head-unit display and objectively by the proposed objective quality metrics.
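As a minimal example of the objective side of such monitoring, the sketch below compares a rendered head-unit frame against a reference frame from the playback system using PSNR. PSNR here merely stands in for the paper's own proposed objective quality metrics, and the frame data is synthetic.

    import numpy as np

    # Minimal full-reference quality check on rendered surround-view frames:
    # PSNR of the head-unit output against the replayed reference frame.
    def psnr(reference, rendered, max_val=255.0):
        mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(max_val ** 2 / mse)

    # Hypothetical frame pair (stand-ins for the captured head-unit output and
    # the corresponding playback reference).
    ref = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
    out = np.clip(ref + np.random.normal(0, 3, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, out):.1f} dB")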

Digital Library: EI
Published Online: January  2018
Pages 148-1 - 148-14,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

The recently established goal of autonomous driving motivates the discussion of safety-relevant performance parameters in the automotive industry. The majority of currently accepted key performance indicators (KPIs) do not allow a good prediction of system performance along a safety-relevant critical effect chain. A breakdown of the functional system down to component and sensor level makes this KPI problem evident. We present a methodology for sensor performance prediction by a probabilistic approach, on the basis of significant critical use cases. As a result, requirements engineering along the effect chain, especially for safety-relevant processes, becomes transparent and understandable. Specific examples from the field of image quality concentrate on the proposal of a new KPI, the contrast detection probability (CDP). This proposal is currently under discussion within the P2020 working group on automotive image quality and challenges known KPIs such as SNR, especially with respect to their effects on automotive use cases.
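For illustration, the sketch below computes a CDP-style statistic under one possible definition: the fraction of pixel-pair contrasts between a bright and a dark patch that fall within a tolerance band around the nominal contrast. The exact definition under discussion in P2020 may differ, and the patch data, contrast measure and tolerance are assumed.

    import numpy as np

    # Illustrative contrast detection probability (CDP) style computation:
    # fraction of pixel-pair contrasts between two patches that fall within a
    # tolerance band around the nominal contrast.
    def cdp(patch_a, patch_b, nominal_contrast, tol=0.5):
        a = patch_a.ravel().astype(np.float64)
        b = patch_b.ravel().astype(np.float64)
        # Weber-like contrast for all pixel pairs across the two patches.
        contrasts = (a[:, None] - b[None, :]) / np.maximum(b[None, :], 1e-6)
        lower, upper = (1 - tol) * nominal_contrast, (1 + tol) * nominal_contrast
        return np.mean((contrasts >= lower) & (contrasts <= upper))

    # Hypothetical noisy patches of a 2:1 contrast target (Weber contrast 1.0).
    bright = np.random.normal(200, 20, (32, 32))
    dark = np.random.normal(100, 20, (32, 32))
    print(f"CDP: {cdp(bright, dark, nominal_contrast=1.0):.2f}")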

Digital Library: EI
Published Online: January  2018
Pages 149-1 - 149-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Training autonomous vehicles requires large numbers of driving sequences covering all situations [1]. Typically a simulation environment (software-in-the-loop, SiL) accompanies real-world test drives to systematically vary environmental parameters. A missing piece in the optical model of those SiL simulations is the sharpness, given in linear system theory by the point spread function (PSF) of the optical system. We present a novel numerical model for the PSF of an optical system that can efficiently model both experimental measurements and lens-design simulations of the PSF. The numerical basis for this model is a non-linear regression of the PSF with an artificial neural network (ANN). The novelty lies in the portability and parameterization of this model, which allows it to be applied in basically any conceivable optical simulation scenario, e.g. inserting a measured lens into a computer game to train autonomous vehicles. We present a lens measurement series yielding a numerical function for the PSF that depends only on the parameters defocus, field and azimuth. By convolving existing images and videos with this PSF model we apply the measured lens as a transfer function, thereby generating an image as if it were seen through the measured lens itself. The method applies to any optical scenario, but we focus on the context of autonomous driving, where the quality of the detection algorithms depends directly on the optical quality of the camera system used. With the parameterization of the optical model we present a method to validate the functional and safety limits of camera-based ADAS based on the real, measured lens actually used in the product.
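A minimal sketch of the regression idea is shown below: a small network maps (defocus, field, azimuth) to a PSF kernel, and the predicted kernel is applied to an image by convolution. Synthetic Gaussian PSFs and scikit-learn's MLPRegressor stand in for the measured lens data and the ANN used in the paper; the kernel size and parameter ranges are assumed.

    import numpy as np
    from scipy.signal import convolve2d
    from sklearn.neural_network import MLPRegressor

    # Sketch: regress the PSF kernel as a function of a few optical parameters
    # with a small neural network, then apply the predicted PSF by convolution.
    K = 9
    yy, xx = np.mgrid[-(K // 2):K // 2 + 1, -(K // 2):K // 2 + 1]

    def synthetic_psf(defocus, field, azimuth):
        # Toy blur growth with defocus and field (azimuth ignored in this toy model).
        sigma = 0.6 + 0.8 * abs(defocus) + 0.4 * field
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return (psf / psf.sum()).ravel()

    # Training set over (defocus, field, azimuth) operating points.
    params = np.random.uniform([-1, 0, 0], [1, 1, 2 * np.pi], size=(500, 3))
    kernels = np.array([synthetic_psf(*p) for p in params])
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(params, kernels)

    # Predict a PSF for one operating point and use it as a transfer function.
    psf = model.predict([[0.3, 0.5, 0.0]]).reshape(K, K)
    psf = np.clip(psf, 0, None)
    psf /= psf.sum()
    image = np.random.rand(64, 64)                  # placeholder for a real frame
    blurred = convolve2d(image, psf, mode="same")   # image "seen" through the lens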

Digital Library: EI
Published Online: January  2018
Pages 163-1 - 163-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Autonomous driving has the potential to positively impact the daily life of humans. Techniques such as image processing, computer vision, and remote sensing have been heavily involved in creating reliable and secure robotic cars. Conversely, the interaction between human perception and autonomous driving has not been deeply explored. Therefore, analysis of human perception during the cognitive driving task, while making critical driving decisions, may provide great benefits for the study of autonomous driving. To achieve such an analysis, eye movement data of human drivers was collected with a mobile eye-tracker while driving in an automotive simulator built around an actual physical car that mimics a realistic driving experience. Initial experiments have been performed to investigate the potential correlation between the driving behaviors and fixation patterns of the human driver.
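By way of illustration only, the sketch below correlates a simple fixation-pattern statistic (gaze dispersion per driving segment) with a driving-behavior statistic (steering-angle variability) on synthetic data; the measures actually analyzed in the study are not specified here.

    import numpy as np
    from scipy.stats import pearsonr

    # Illustrative analysis: correlate a fixation-pattern statistic with a
    # driving-behavior statistic per driving segment (synthetic data).
    rng = np.random.default_rng(0)
    n_segments = 40
    gaze_dispersion = rng.normal(2.0, 0.5, n_segments)   # degrees of visual angle
    steering_std = 0.3 * gaze_dispersion + rng.normal(0, 0.2, n_segments)

    r, p = pearsonr(gaze_dispersion, steering_std)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")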

Digital Library: EI
Published Online: January  2018
Pages 164-1 - 164-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Autonomous driving is an active area of research in the automotive market. The development of automated functions such as highway driving and autonomous parking requires a robust platform for development and safety qualification of the system. In this context, virtual simulation platforms are key enablers for the development of algorithms, software and hardware components. In this paper, we discuss multiple virtual simulation platforms available in the market, including open-source car simulators, commercial automotive offerings and gaming platforms. We discuss the key factors that make a virtual platform suitable for automated driving function development. Based on the analysis of the various simulation platforms, we end the paper with a proposal of a two-stage approach for automated driving functionality development.

Digital Library: EI
Published Online: January  2018
Pages 256-1 - 256-5,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Automated driving requires fusing information from a multitude of sensors such as cameras, radars and lidars mounted around the car to handle various driving scenarios, e.g. highway, parking, urban driving and traffic jams. Fusion also enables better functional safety by handling challenging scenarios such as adverse weather conditions, time of day, occlusion, etc. The paper gives an overview of the popular fusion techniques, namely Kalman filters and their variations, e.g. Extended Kalman filters and Unscented Kalman filters. The paper proposes a choice of fusion technique for a given sensor configuration and its model parameters. The second part of the paper focuses on an efficient solution for series production on an embedded platform, using Texas Instruments' TDAx automotive SoC. The performance is benchmarked separately for the "predict" and "update" phases for different sensor modalities. For typical L3/L4 automated driving involving multiple cameras, radars and lidars, fusion can be supported in real time by a single DSP using the proposed techniques, enabling a cost-optimized solution.
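A minimal linear Kalman filter with the predict and update phases kept separate, mirroring the two phases benchmarked here, is sketched below. The constant-velocity state model, position-only measurement and noise values are illustrative; EKF and UKF variants replace the linear transition and measurement models with nonlinear ones.

    import numpy as np

    # Minimal linear Kalman filter with separate "predict" and "update" phases.
    dt = 0.05
    F = np.array([[1, dt], [0, 1]])          # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])               # measure position only
    Q = np.diag([0.01, 0.1])                 # process noise covariance
    R = np.array([[0.5]])                    # measurement noise covariance

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    x, P = np.zeros(2), np.eye(2)
    for z in [np.array([1.0]), np.array([1.1]), np.array([1.3])]:
        x, P = predict(x, P)                 # propagate state to measurement time
        x, P = update(x, P, z)               # fuse the new measurement
    print(x)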

Digital Library: EI
Published Online: January  2018
Pages 257-1 - 257-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

In autonomous vehicle systems, sensing the surrounding environment is important for an intelligent vehicle to make the right decision about its actions. Understanding the neighboring environment from sensing data can make the vehicle aware of other moving objects nearby (e.g., vehicles or pedestrians) and therefore avoid collisions. This local situational awareness mostly depends on extracting information from a variety of sensors (e.g. camera, LIDAR, RADAR), each of which has its own operating conditions (e.g., lighting, range, power). One of the open issues in the reconstruction and understanding of the environment of an autonomous vehicle is how to fuse locally sensed data to support a specific decision task such as vehicle detection. In this paper, we study the problem of fusing data from camera and LIDAR sensors and propose a novel 6D (RGB+XYZ) data representation to support visual inference. This work extends the previous Position- and Intensity-included Histogram of Oriented Gradients (PIHOG or πHOG) from color space to the proposed 6D space, targeting more reliable vehicle detection than a single-sensor approach. Our experimental results validate the effectiveness of the proposed multi-sensor data fusion approach: it achieves a detection accuracy of 73% on the challenging KITTI dataset.
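As a rough sketch of feature extraction over such a 6D representation, the example below stacks RGB with projected LIDAR XYZ channels and computes a standard HOG descriptor per channel, concatenating the results. This stands in for the πHOG extension described in the paper; the detection-window size and HOG parameters are assumed.

    import numpy as np
    from skimage.feature import hog

    # Sketch of a 6D (RGB + XYZ) representation for camera/LIDAR fusion: LIDAR
    # points projected into the image form three extra coordinate channels, and
    # gradient-orientation histograms are computed per channel.
    def six_channel_descriptor(rgb, xyz):
        """rgb: HxWx3 uint8 image; xyz: HxWx3 projected LIDAR coordinates."""
        channels = np.dstack([rgb.astype(np.float64) / 255.0, xyz])
        feats = [hog(channels[..., c], orientations=9,
                     pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                 for c in range(channels.shape[-1])]
        return np.concatenate(feats)   # feed to a vehicle/non-vehicle classifier

    # Hypothetical 64x64 detection window.
    rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    xyz = np.random.rand(64, 64, 3)
    print(six_channel_descriptor(rgb, xyz).shape)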

Digital Library: EI
Published Online: January  2018
