Pages 567-1 - 567-6,  © Society for Imaging Science and Technology 2018
Digital Library: EI
Published Online: January  2018
Pages 105-1 - 105-10,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

This paper explores the use of existing methods found in the image science literature to perform 'first-pass' specification and modeling of imaging systems intended for use in autonomous vehicles. The use of the Johnson Criteria [1], and suggestions for its adaptation to modern systems comprising neural networks or other machine vision techniques, is discussed to enable initial selection of field of view, pixel size and sensor format. More sophisticated Modulation Transfer Function (MTF) modeling is detailed to estimate the frequency response of the system, including lower bounds due to phase effects between the sampling grid and scene [2]. A signal model is then presented accounting for illumination spectra, geometry and light level, scene reflectance, lens geometry and transmission, and sensor quantum efficiency, yielding electrons per lux-second per pixel in the plane of the sensor. A basic noise model is outlined and an information-theory-based approach to camera ranking is presented. Thoughts on extending the above to examine color differences between objects are mentioned. The results from the models are used in examples to demonstrate preliminary ranking of differently specified systems in various imaging conditions.
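As a rough illustration of the kind of signal model described above, the following sketch estimates photoelectrons per pixel from scene illuminance, reflectance, lens f-number and transmission, pixel pitch, exposure time and quantum efficiency. All numerical values and the monochromatic 555 nm lux-to-photon conversion are simplifying assumptions for illustration, not figures from the paper.

```python
# Hedged sketch: back-of-envelope photoelectron signal model for a camera pixel.
# Parameter values and the 555 nm approximation are illustrative assumptions.
import math

def electrons_per_pixel(scene_lux, reflectance, f_number, lens_transmission,
                        pixel_pitch_um, exposure_s, quantum_efficiency):
    # Lambertian scene luminance (cd/m^2) from illuminance and reflectance
    luminance = scene_lux * reflectance / math.pi
    # Camera equation: image-plane illuminance (lux) for a distant object
    image_plane_lux = math.pi * luminance * lens_transmission / (4.0 * f_number ** 2)
    # Crude lux -> photon flux conversion assuming monochromatic 555 nm light:
    # 1 lux = 1/683 W/m^2 at 555 nm; photon energy = h*c/lambda
    watts_per_m2 = image_plane_lux / 683.0
    photon_energy = 6.626e-34 * 2.998e8 / 555e-9            # joules per photon
    photons_per_m2_s = watts_per_m2 / photon_energy
    pixel_area_m2 = (pixel_pitch_um * 1e-6) ** 2
    return photons_per_m2_s * pixel_area_m2 * exposure_s * quantum_efficiency

# Example: 10 lux night scene, 20% reflectance target, f/1.8 lens, 3 um pixels, 10 ms exposure
print(electrons_per_pixel(10, 0.2, 1.8, 0.9, 3.0, 0.01, 0.6))
```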

Digital Library: EI
Published Online: January  2018
Pages 146-1 - 146-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

In recent years, the use of LED lighting has become widespread in the automotive environment, largely because of its high energy efficiency, reliability, and low maintenance costs. There has also been a concurrent increase in the use and complexity of automotive camera systems. To a large extent, LED lighting and automotive camera technology evolved separately and independently. As the use of both technologies has increased, it has become clear that LED lighting poses significant challenges for automotive imaging, namely the so-called "LED flicker". LED flicker is an artifact observed in digital imaging where an imaged light source appears to flicker even though the light source appears constant to a human observer. This paper defines the root cause and manifestations of LED flicker. It defines the use cases in which LED flicker occurs and the related consequences. It further defines a test methodology and metrics for evaluating an imaging system's susceptibility to LED flicker.
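The underlying mechanism is the interplay between a PWM-driven LED and a short camera exposure window: if the exposure does not overlap the LED's on-time, the source is captured dark in that frame. The sketch below is a minimal Monte Carlo illustration of that effect; the PWM frequency, duty cycle and "visibly off" threshold are assumed values, not the test conditions or metrics defined in the paper.

```python
# Hedged sketch: Monte Carlo illustration of LED flicker risk from PWM/exposure interplay.
import numpy as np

def flicker_probability(pwm_freq_hz, duty_cycle, exposure_s, off_threshold=0.1, trials=20_000):
    period = 1.0 / pwm_freq_hz
    on_time = duty_cycle * period
    rng = np.random.default_rng(0)
    phases = rng.uniform(0.0, period, trials)           # random exposure start vs. PWM phase
    captured = np.empty(trials)
    for i, phase in enumerate(phases):
        # on-intervals of the LED, expressed relative to the exposure window [0, exposure_s)
        starts = np.arange(0, exposure_s + period, period) - phase
        overlap = np.clip(np.minimum(starts + on_time, exposure_s)
                          - np.maximum(starts, 0.0), 0.0, None)
        captured[i] = overlap.sum() / exposure_s        # fraction of exposure with LED on
    return np.mean(captured < off_threshold)            # fraction of frames where LED looks dark

# Example: 90 Hz PWM, 25% duty cycle, 1 ms exposure -> many frames can miss the pulse
print(flicker_probability(90, 0.25, 0.001))
```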

Digital Library: EI
Published Online: January  2018
Pages 147-1 - 147-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Surround view camera systems are nowadays commonly offered by most car manufacturers. A considerable number of different multi-camera visualization systems currently exist in the automotive sector, and they are difficult to evaluate and compare in terms of visual performance. This is mainly due to the lack of standardized evaluation approaches, unpredictable 3D input content, unpredictable outdoor conditions, non-standardized display units, and visual quality requirements that are not clearly identified by the car manufacturers. Recently, the IEEE P2020 initiative was established to address standards for image quality in automotive systems. In this paper, we address the problem of reliably evaluating multi-camera automotive surround view systems in terms of visual quality. We propose a test methodology and an efficient test system platform with a video playback system and real camera input images captured from the vehicle, which enables visual quality to be monitored subjectively on the head-unit display and objectively by the proposed objective quality metrics.
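For orientation, the snippet below computes two generic full-reference scores (PSNR and RMS contrast) for a rendered surround view frame against a reference render. These are stand-in metrics for illustration only and are not the objective quality metrics proposed in the paper.

```python
# Hedged sketch: generic full-reference quality scores for a rendered surround view frame.
import numpy as np

def psnr(reference, rendered, peak=255.0):
    mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def rms_contrast(image):
    gray = image.astype(np.float64).mean(axis=-1)        # naive luminance
    return gray.std() / max(gray.mean(), 1e-12)

# Example with random stand-in frames (real use would load head-unit captures)
ref = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
out = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(psnr(ref, out), rms_contrast(out))
```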

Digital Library: EI
Published Online: January  2018
Pages 148-1 - 148-14,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

The recently established goal of autonomous driving motivates the discussion of safety-relevant performance parameters in the automotive industry. The majority of currently accepted key performance indicators (KPIs) do not allow a good prediction of system performance along a safety-relevant critical effect chain. A breakdown of the functional system down to the component and sensor levels makes this KPI problem evident. We present a methodology for sensor performance prediction using a probabilistic approach, on the basis of significant critical use cases. As a result, the requirement engineering along the effect chain, especially for safety-relevant processes, becomes transparent and understandable. Specific examples from the field of image quality concentrate on the proposal of a new KPI, the contrast detection probability (CDP). This proposal is currently under discussion within the P2020 work group on automotive image quality and challenges known KPIs such as SNR, especially with respect to their effects on automotive use cases.
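To convey the flavor of a probabilistic KPI, the sketch below estimates, by Monte Carlo simulation, how often the observed contrast between two noisy patches stays within a tolerance band around the true contrast. The noise model, the Weber-like contrast definition and the ±50% tolerance band are assumptions for illustration; the exact CDP definition under discussion in P2020 may differ in detail.

```python
# Hedged sketch: an illustrative Monte Carlo reading of contrast detection probability (CDP).
import numpy as np

def cdp(dark_signal_e, bright_signal_e, read_noise_e=5.0, tolerance=0.5, trials=100_000):
    rng = np.random.default_rng(1)
    # Per-trial patch values: shot noise (Poisson) plus Gaussian read noise, in electrons
    dark = rng.poisson(dark_signal_e, trials) + rng.normal(0, read_noise_e, trials)
    bright = rng.poisson(bright_signal_e, trials) + rng.normal(0, read_noise_e, trials)
    true_contrast = (bright_signal_e - dark_signal_e) / bright_signal_e   # Weber-like contrast
    observed = (bright - dark) / np.maximum(bright, 1e-12)
    within_band = np.abs(observed - true_contrast) <= tolerance * true_contrast
    return within_band.mean()

# Example: a low-light patch pair is far harder to tell apart than a bright pair
print(cdp(20, 40), cdp(2000, 4000))
```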

Digital Library: EI
Published Online: January  2018
Pages 149-1 - 149-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Training autonomous vehicles requires lots of driving sequences in all situations[1]. Typically a simulation environment (software-in-the-loop, SiL) accompanies real-world test drives to systematically vary environmental parameters. A missing piece in the optical model of those SiL simulations is the sharpness, given in linear system theory by the point-spread function (PSF) of the optical system. We present a novel numerical model for the PSF of an optical system that can efficiently model both experimental measurements and lens design simulations of the PSF. The numerical basis for this model is a non-linear regression of the PSF with an artificial neural network (ANN). The novelty lies in the portability and the parameterization of this model, which allows to apply this model in basically any conceivable optical simulation scenario, e.g. inserting a measured lens into a computer game to train autonomous vehicles. We present a lens measurement series, yielding a numerical function for the PSF that depends only on the parameters defocus, field and azimuth. By convolving existing images and videos with this PSF model we apply the measured lens as a transfer function, therefore generating an image as if it were seen with the measured lens itself. Applications of this method are in any optical scenario, but we focus on the context of autonomous driving, where quality of the detection algorithms depends directly on the optical quality of the used camera system. With the parameterization of the optical model we present a method to validate the functional and safety limits of camera-based ADAS based on the real, measured lens actually used in the product.
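The sketch below illustrates the general idea of such a parameterized PSF model: a small neural network regresses a PSF kernel as a function of (defocus, field, azimuth), and the predicted kernel is then applied to an image by convolution. The Gaussian "measured" PSFs, kernel size and network shape are stand-in assumptions, not the paper's actual data or model.

```python
# Hedged sketch: ANN regression of a PSF kernel over (defocus, field, azimuth), then convolution.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.signal import convolve2d

K = 9                                                    # assumed kernel size (pixels)
yy, xx = np.mgrid[-(K // 2):K // 2 + 1, -(K // 2):K // 2 + 1]

def synthetic_psf(defocus, field, azimuth):
    """Stand-in for a measured PSF: an isotropic Gaussian that blurs more with defocus/field."""
    sigma = 0.6 + 1.5 * abs(defocus) + 0.8 * field
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return (psf / psf.sum()).ravel()

# Build a training set over the parameter space (defocus, field, azimuth)
rng = np.random.default_rng(0)
params = rng.uniform([-1, 0, 0], [1, 1, 2 * np.pi], size=(500, 3))
kernels = np.array([synthetic_psf(*p) for p in params])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(params, kernels)

# Query the model at one operating point and convolve an image with the predicted PSF
psf = model.predict([[0.3, 0.5, 1.0]]).reshape(K, K)
psf = np.clip(psf, 0, None); psf /= psf.sum()            # re-normalize the predicted kernel
image = rng.random((128, 128))                            # stand-in for a simulator frame
blurred = convolve2d(image, psf, mode='same', boundary='symm')
```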

Digital Library: EI
Published Online: January  2018
Pages 163-1 - 163-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Autonomous driving has the potential to positively impact the daily life of humans. Techniques such as image processing, computer vision, and remote sensing have been highly involved in creating reliable and secure robotic cars. Conversely, the interaction between human perception and autonomous driving has not been deeply explored. Therefore, analyzing human perception during the cognitive driving task, while critical driving decisions are being made, may provide great benefits for the study of autonomous driving. To achieve such an analysis, eye movement data of human drivers was collected with a mobile eye tracker while driving in an automotive simulator built around an actual physical car, which mimics a realistic driving experience. Initial experiments have been performed to investigate the potential correlation between the driving behaviors and fixation patterns of the human driver.

Digital Library: EI
Published Online: January  2018
Pages 164-1 - 164-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Autonomous driving is an active area of research in the automotive market. The development of automated functions such as highway driving, autonomous parking, etc. requires a robust platform for development and safety qualification of the system. In this context, virtual simulation platforms are key enablers for the development of algorithms, software, and hardware components. In this paper, we discuss multiple virtual simulation platforms available in the market, including open-source car simulators, commercial automotive vendors, and gaming platforms. We discuss the key factors that make a virtual platform suitable for the development of automated driving functions. Based on the analysis of the various simulation platforms, we end the paper with a proposal of a two-stage approach for developing automated driving functionality.

Digital Library: EI
Published Online: January  2018
Pages 256-1 - 256-5,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

Automated driving requires fusing information from a multitude of sensors, such as cameras, radars, and lidars mounted around the car, to handle various driving scenarios, e.g. highway, parking, urban driving, and traffic jams. Fusion also enables better functional safety by handling challenging scenarios such as weather conditions, time of day, occlusion, etc. The paper gives an overview of the popular fusion techniques, namely the Kalman filter and its variations, e.g. the Extended Kalman Filter and the Unscented Kalman Filter. The paper proposes a choice of fusion technique and its model parameters for a given sensor configuration. The second part of the paper focuses on an efficient solution for series production on an embedded platform, Texas Instruments' TDAx automotive SoC. The performance is benchmarked separately for the "predict" and "update" phases for different sensor modalities. For typical L3/L4 automated driving consisting of multiple cameras, radars, and lidars, fusion can be supported in real time by a single DSP using the proposed techniques, enabling a cost-optimized solution.
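The "predict" and "update" phases referred to above are the two steps of the standard linear Kalman filter, sketched below. The constant-velocity state model, measurement model and noise levels are assumptions for illustration, not the sensor models used on the TDAx platform.

```python
# Hedged sketch: generic linear Kalman filter predict/update steps for a single tracked object.
import numpy as np

dt = 0.05                                   # assumed 20 Hz fusion cycle
F = np.array([[1, 0, dt, 0],                # constant-velocity state transition (x, y, vx, vy)
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # measurement model: sensor reports position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                        # process noise covariance (assumed)
R = np.eye(2) * 0.25                        # measurement noise covariance (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# One cycle: predict the track forward, then fuse a position measurement from one sensor
x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([1.2, 0.4]))
print(x)
```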

Digital Library: EI
Published Online: January  2018
Pages 257-1 - 257-6,  © Society for Imaging Science and Technology 2018
Volume 30
Issue 17

In autonomous vehicle systems, sensing the surrounding environment is important for an intelligent vehicle to make the right decisions about its actions. Understanding the neighboring environment from sensing data enables the vehicle to be aware of other moving objects nearby (e.g., vehicles or pedestrians) and therefore avoid collisions. This local situational awareness mostly depends on extracting information from a variety of sensors (e.g. camera, LIDAR, RADAR), each of which has its own operating conditions (e.g., lighting, range, power). One of the open issues in the reconstruction and understanding of the environment of an autonomous vehicle is how to fuse locally sensed data to support a specific decision task such as vehicle detection. In this paper, we study the problem of fusing data from camera and LIDAR sensors and propose a novel 6D (RGB+XYZ) data representation to support visual inference. This work extends the previous Position and Intensity-included Histogram of Oriented Gradients (PIHOG or πHOG) from color space to the proposed 6D space, which targets more reliable vehicle detection than a single-sensor approach. Our experimental results validate the effectiveness of the proposed multi-sensor data fusion approach, which achieves a detection accuracy of 73% on the challenging KITTI dataset.
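The basic fusion step such a representation builds on is projecting LIDAR returns into the camera image so that each pixel can carry color plus 3D position. The sketch below shows that step; the intrinsic and extrinsic matrices are placeholders (on KITTI they would come from the provided calibration files), and this is not the paper's exact pipeline.

```python
# Hedged sketch: per-pixel 6D (R, G, B, X, Y, Z) representation from camera/LIDAR fusion.
import numpy as np

def fuse_rgb_xyz(image, lidar_xyz, K_cam, T_velo_to_cam):
    """image: HxWx3 uint8, lidar_xyz: Nx3 points in the LIDAR frame, T_velo_to_cam: 3x4."""
    h, w, _ = image.shape
    # Transform LIDAR points into the camera frame (homogeneous coordinates)
    pts_cam = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))]) @ T_velo_to_cam.T
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]                  # keep points in front of the camera
    # Pinhole projection onto the image plane
    proj = pts_cam @ K_cam.T
    uv = (proj[:, :2] / proj[:, 2:3]).astype(int)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv, pts_cam = uv[valid], pts_cam[valid]
    # 6D tensor: RGB everywhere, XYZ filled in where a LIDAR return projects (zeros elsewhere)
    fused = np.zeros((h, w, 6), dtype=np.float32)
    fused[..., :3] = image / 255.0
    fused[uv[:, 1], uv[:, 0], 3:] = pts_cam
    return fused

# Example with placeholder data and calibration
img = np.zeros((375, 1242, 3), dtype=np.uint8)
pts = np.random.uniform([-10, -10, 1], [10, 10, 40], size=(1000, 3))
K_cam = np.array([[721.5, 0, 609.6], [0, 721.5, 172.9], [0, 0, 1.0]])
T = np.eye(4)[:3]                                            # placeholder extrinsics
print(fuse_rgb_xyz(img, pts, K_cam, T).shape)
```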

Digital Library: EI
Published Online: January  2018
