Keywords: 3D imaging; activity classification; advanced driver assistance systems (ADAS); appearance-based methods; atmospheric point spread function; atmospheric scattering model; automotive; autonomous driving; autonomous vehicles; banding; behavior prediction; blindness; camera performance; cameras; cognition; color separation probability (CSP); computer vision; continuous tracking; contrast detection probability (CDP); convolutional neural networks (CNNs); data association; dead leaves; deep learning; detection theory; driver assistance; driver monitoring systems; dynamic range; end-to-end; flicker measurement; fusion algorithms; gaze estimation; geometric calibration; human factors; ideal observer; IEEE P2020 automotive image quality standards; image quality; image sensors; image subtraction; ISO 12233; ISO 19567; Joint Attention for Autonomous Driving (JAAD); KPIs; laser speckle; LED flicker; LiDAR; machine learning; mapping; metrology; mobile laboratory; modulation transfer function (MTF); mud; multi-pedestrian tracking (MPT); multitask learning; night scenes; noise equivalent quanta; noise power spectrum; object detection; object tracking; pedestrian intention; perception; radar; rain; receiver operating characteristic; road maintenance; robotics; semantic segmentation; semi-automatic video annotation; sensor fusion; sensor simulation; sensors and processors; SFR; Siamese random forest (SRF); signal detection theory; single image haze removal; slanted edge; SSIM; stereo vision; sun glare detection; surveillance; temporal sampling; texture loss; tracking-by-detection; traffic videos; trajectory prediction; transmission map; user interface; validation; vehicle control; visibility; visual object tracking; visualization; vulnerable road users (VRU); weather; You Only Look Once v3 (YOLOv3)
Pages 1-1 - 1-6,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

The introduction of pulse-width-modulated (PWM) LED lighting in automotive applications has created the phenomenon of LED flicker. In essence, LED flicker is an imaging artifact whereby a light source appears to flicker when imaged by a camera system, even though the light appears constant to a human observer. The implications of LED flicker vary depending on the imaging application. In some cases, it simply degrades image quality by presenting annoying flicker to a human observer. However, LED flicker also has the potential to significantly impact the performance of critical autonomous driving functions. In this paper, the root cause of LED flicker is reviewed, and its impact on automotive use cases is explored. Guidelines on the measurement and assessment of LED flicker are also provided.
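
The mechanism is easy to reproduce numerically. Below is a minimal sketch (our illustration, not taken from the paper; the PWM frequency, duty cycle, exposure time, and frame rate are assumed values) that integrates a PWM waveform over a short exposure window each frame and shows the captured level swinging from frame to frame even though the time-averaged level seen by a human is constant.

    import numpy as np

    def pwm_on(t, freq_hz, duty):
        # 1.0 while the LED is on, 0.0 while it is off, at each time in t (seconds).
        phase = (t * freq_hz) % 1.0
        return (phase < duty).astype(float)

    def captured_level(t_start, exposure_s, freq_hz, duty, n_sub=10_000):
        # Mean LED level integrated over one exposure window.
        t = t_start + np.linspace(0.0, exposure_s, n_sub)
        return pwm_on(t, freq_hz, duty).mean()

    freq_hz, duty = 92.0, 0.25   # 92 Hz PWM at 25% duty cycle (assumed)
    exposure_s = 1e-3            # 1 ms exposure, far shorter than the ~10.9 ms PWM period
    fps = 30.0                   # camera frame rate

    levels = [captured_level(k / fps, exposure_s, freq_hz, duty) for k in range(30)]
    print("frame-to-frame captured levels:", np.round(levels, 2))
    # The captured level swings between ~0 and ~1 across frames, while a human
    # observer, integrating over tens of milliseconds, sees a steady LED.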

Digital Library: EI
Published Online: January  2020
Pages A16-1 - A16-7,  © Society for Imaging Science and Technology 2020
Digital Library: EI
Published Online: January  2020
Pages 19-1 - 19-10,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

In this paper, we present an overview of automotive image quality challenges and link them to the physical properties of image acquisition. This analysis shows that detection-probability-based KPIs are a helpful tool for linking image quality to the SAE-classified assisted and automated driving tasks. We develop questions around the challenges of automotive image quality and show that color separation probability (CSP) and contrast detection probability (CDP) in particular are key enablers for structuring the image quality optimization problem. We then introduce a proposal for color separation probability as a new KPI, based on the random effects of photon shot noise and the properties of light spectra that cause color metamerism. This allows us to demonstrate the color-related image quality influences at different stages of the image generation pipeline. In the second part, we investigate the previously presented KPI, contrast detection probability, and show how it links to different metrics of automotive imaging such as HDR, low-light performance, and the detectability of an object. In conclusion, the paper summarizes the standardization status of these detection-probability-based KPIs within IEEE P2020 and outlines the next steps for these work packages.
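
As a rough illustration of the shot-noise reasoning behind CSP (a Monte Carlo sketch of our own construction, not the standardized definition; the per-channel electron counts and the nearest-mean decision rule are assumptions), one can perturb two nearby mean colors with Poisson photon shot noise and estimate how often they remain separable:

    import numpy as np

    rng = np.random.default_rng(0)
    mu_a = np.array([1200.0, 900.0, 400.0])   # patch A: mean photoelectrons in R, G, B
    mu_b = np.array([1100.0, 950.0, 450.0])   # patch B: a nearby (near-metameric) color

    n = 100_000
    samples_a = rng.poisson(mu_a, size=(n, 3)).astype(float)  # photon shot noise
    samples_b = rng.poisson(mu_b, size=(n, 3)).astype(float)

    def correctly_assigned(samples, own_mu, other_mu):
        # Nearest-mean rule: a sample is "separated" if it stays closer to its own color.
        d_own = np.linalg.norm(samples - own_mu, axis=1)
        d_other = np.linalg.norm(samples - other_mu, axis=1)
        return d_own < d_other

    csp = 0.5 * (correctly_assigned(samples_a, mu_a, mu_b).mean()
                 + correctly_assigned(samples_b, mu_b, mu_a).mean())
    print(f"estimated color separation probability: {csp:.3f}")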

Digital Library: EI
Published Online: January  2020
Pages 38-1 - 38-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

High-frequency flickering light sources such as pulse-width-modulated LEDs can cause image sensors to record incorrect levels. We describe a model with a loose set of assumptions (encompassing multi-exposure HDR schemes) which can be used to define the Flicker Signal, a continuous function of time based on the phase relationship between the light source and the exposure window. Analysis of the shape of this signal yields a characterization of the camera's response to a flickering light source (typically seen as an undesirable susceptibility) under a given set of parameters. Flicker Signal calculations are made on discrete samples measured from image data. Sampling the signal is difficult, however, because it is a function of many parameters, including properties of the light source (frequency, duty cycle, intensity) and properties of the imaging system (exposure scheme, frame rate, row readout time). Moreover, there are degenerate scenarios in which sufficient sampling is difficult to obtain. We present a computational approach for determining the evidence (region of interest, duration of test video) necessary to cover this signal sufficiently for characterization in a practical test lab setup.
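
The sampling difficulty can be made concrete with a little arithmetic: each frame samples the Flicker Signal at a phase that advances by the fractional part of f_led/fps per frame, so near-integer ratios leave the phase axis poorly covered. A small sketch under assumed parameters (our construction, not the paper's tool):

    import numpy as np

    def phase_coverage(f_led_hz, fps, n_frames):
        # Phases (in cycles, [0, 1)) at which the video samples the Flicker Signal,
        # plus the largest phase gap left uncovered.
        phases = np.sort((np.arange(n_frames) * f_led_hz / fps) % 1.0)
        gaps = np.diff(np.concatenate([phases, [phases[0] + 1.0]]))
        return phases, gaps.max()

    for f_led in (90.0, 90.1, 120.0):
        _, worst_gap = phase_coverage(f_led, fps=30.0, n_frames=300)
        print(f"LED {f_led:6.1f} Hz @ 30 fps, 300 frames: "
              f"largest phase gap = {worst_gap:.4f} cycles")
    # 90 Hz and 120 Hz are exact multiples of the 30 fps frame rate: every frame
    # lands on the same phase (gap = 1.0 cycles) and no test duration helps.
    # 90.1 Hz sweeps the full cycle in about 300 frames (10 s of video).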

Digital Library: EI
Published Online: January  2020
Pages 40-1 - 40-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

Contrast detection probability (CDP) is proposed as an IEEE P2020 metric to predict the performance of cameras intended for computer vision tasks in autonomous vehicles. Its calculation involves comparing combinations of pixel values between imaged patches. Computing CDP for all meaningful combinations of m patches involves approximately (3/2)(m² - m)·n⁴ operations, where n is the length of one side of a patch in pixels. This work presents a method to estimate Weber-contrast CDP from individual patch statistics, which reduces the computation to approximately 4n²m operations. For 180 patches of 10×10 pixels this is a reduction of approximately 6,500 times, and for 180 patches of 25×25 pixels, approximately 41,000 times. The absolute error in the estimated CDP is less than 0.04, or 5%, where the noise is well described by Gaussian statistics. Results from the full calculation and the fast estimate are compared for simulated patches. Basing the CDP estimate on individual patch statistics, rather than on pixel-to-pixel comparison, facilitates the prediction of CDP values from a physical model of exposure and camera conditions. This allows Weber CDP behavior to be investigated for a wide variety of conditions and leads to the discovery that, when contrast is increased by decreasing the tone value of one patch (thereby increasing noise as contrast increases), there exists a maximum that yields identical Weber CDP values for patches of different nominal contrast. In other words, Weber CDP predicts the same detection performance for patches of different contrast.
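
To make the cost comparison concrete, the sketch below (our construction under Gaussian assumptions, not the authors' exact estimator; the patch statistics and tolerance band are assumed values) computes the brute-force Weber CDP over all pixel pairs of two patches and a fast estimate from each patch's mean and variance via first-order error propagation:

    import numpy as np
    from math import erf, sqrt

    def phi(z):
        # Standard normal CDF.
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def cdp_full(dark, bright, c_nominal, tol):
        # Brute force: Weber contrast for every (dark, bright) pixel pair.
        d = dark.ravel()[:, None]
        b = bright.ravel()[None, :]
        w = (b - d) / d
        return np.mean(np.abs(w - c_nominal) < tol)

    def cdp_fast(dark, bright, c_nominal, tol):
        # Patch-statistics estimate: W = b/d - 1 treated as approximately Gaussian,
        # with variance from first-order propagation of the two patch variances.
        mu_d, mu_b = dark.mean(), bright.mean()
        var_d, var_b = dark.var(), bright.var()
        mu_w = mu_b / mu_d - 1.0
        s = sqrt(var_b / mu_d**2 + (mu_b**2 / mu_d**4) * var_d)
        return phi((c_nominal + tol - mu_w) / s) - phi((c_nominal - tol - mu_w) / s)

    rng = np.random.default_rng(1)
    dark = rng.normal(100.0, 5.0, size=(25, 25))    # simulated 25x25 patches
    bright = rng.normal(140.0, 6.0, size=(25, 25))
    c0 = 0.4                                         # nominal Weber contrast
    print("full pixel-pair CDP:", cdp_full(dark, bright, c0, tol=0.1))
    print("fast patch-stats CDP:", cdp_fast(dark, bright, c0, tol=0.1))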

Digital Library: EI
Published Online: January  2020
Pages 41-1 - 41-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

Many of the metrics developed for informational imaging are useful in automotive imaging, since many of the tasks (for example, object detection and identification) are similar. This work discusses sensor characterization parameters for the Ideal Observer SNR model and elaborates on the noise power spectrum. It presents cross-correlation analysis results for matched-filter detection of a tribar pattern in sets of resolution-target images captured with three image sensors over a range of illumination levels. Lastly, the work compares the cross-correlation data to predictions made by the Ideal Observer model and demonstrates good agreement between the two methods in the relative evaluation of detection capabilities.
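
The matched-filter measurement is simple to sketch (a toy example with assumed noise level, target contrast, and template geometry, not the paper's data): correlate a known tribar template with a noisy capture and read the detection statistic off the correlation peak.

    import numpy as np
    from scipy.signal import correlate2d

    rng = np.random.default_rng(2)

    # Tribar template: three vertical bars on a dark background.
    template = np.zeros((15, 15))
    template[:, 2:4] = template[:, 7:9] = template[:, 12:14] = 1.0

    scene = rng.normal(0.0, 0.5, size=(128, 128))    # sensor noise floor
    scene[40:55, 60:75] += 0.8 * template            # embedded low-contrast target

    kern = template - template.mean()                # zero-mean matched filter
    resp = correlate2d(scene, kern, mode="same")

    peak_rc = np.unravel_index(resp.argmax(), resp.shape)
    snr = (resp.max() - resp.mean()) / resp.std()
    print(f"matched-filter peak at {peak_rc}, detection SNR ~ {snr:.1f}")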

Digital Library: EI
Published Online: January  2020
Pages 79-1 - 79-8,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

Camera sensors are crucial for autonomous driving, as they are the only sensing modality that provides measured color information about the surrounding scene. Cameras are directly exposed to external weather conditions, where visibility can be dramatically degraded by rain, ice, fog, soil, and other contaminants. Hence, it is crucial to detect and remove the visibility degradation caused by harsh weather conditions. In this paper, we focus mainly on soiling degradation. We provide methods for classifying the soiled parts of the image as well as methods for estimating the scene behind the soiled parts. A new dataset with manually annotated soiling masks, known as the WoodScape dataset, is created to encourage research in this area.

Digital Library: EI
Published Online: January  2020
Pages 80-1 - 80-9,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

Sun glare is a commonly encountered problem in both manual and automated driving. It causes over-exposure in the image and significantly impacts visual perception algorithms. For higher levels of automated driving, it is essential for the system to recognize sun glare that can cause system degradation. There is very limited literature on detecting sun glare for automated driving; it is primarily based on finding areas of saturated brightness and extracting regions via image processing heuristics. From the perspective of a safety system, a highly robust algorithm is necessary. Thus, we designed two complementary algorithms: one using classical image processing techniques and one using a CNN, which can learn global context. We also discuss how a sun glare detection algorithm can efficiently fit into a typical automated driving system. As there is no public dataset, we created our own and will release it publicly via the WoodScape project [1] to encourage further research in this area.
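
The classical branch can be sketched in a few lines (our construction, not the paper's exact pipeline; the saturation threshold and minimum blob area are assumed values): threshold the over-exposed pixels, close small gaps, and keep only large connected components as glare candidates.

    import numpy as np
    from scipy import ndimage

    def detect_glare(gray, sat_thresh=250, min_area=500):
        # Boolean mask of candidate sun-glare regions in an 8-bit grayscale image.
        saturated = gray >= sat_thresh                 # over-exposed pixels
        saturated = ndimage.binary_closing(saturated, iterations=3)
        labels, n = ndimage.label(saturated)           # connected components
        mask = np.zeros_like(saturated)
        for i in range(1, n + 1):
            region = labels == i
            if region.sum() >= min_area:               # drop small specular speckle
                mask |= region
        return mask

    # Synthetic example: a bright disc (glare) plus scattered saturated speckle.
    img = np.full((240, 320), 80, dtype=np.uint8)
    yy, xx = np.mgrid[:240, :320]
    img[(yy - 60) ** 2 + (xx - 250) ** 2 < 30 ** 2] = 255
    rng = np.random.default_rng(3)
    img[rng.integers(0, 240, 50), rng.integers(0, 320, 50)] = 255
    print("glare pixels found:", int(detect_glare(img).sum()))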

Digital Library: EI
Published Online: January  2020
Pages 81-1 - 81-6,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

Haze is one of the sources of image degradation. It reduces the contrast and saturation not only of natural images but also of road scenes. Most haze removal algorithms remove the effect of haze using an atmospheric scattering model, and most are based on a single-scattering model that does not account for the blur in the hazy image. In this paper, a novel haze removal algorithm using a multiple-scattering model with deconvolution is proposed. The proposed algorithm accounts for the blurring effect in the hazy image. Downsampling of the hazy image is also used to estimate the atmospheric light efficiently. Synthetic road scenes with and without haze are used to evaluate the performance of the proposed method. Experimental results demonstrate that the proposed algorithm restores haze-degraded images better both qualitatively and quantitatively.
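
For reference, the single-scattering model mentioned above is commonly written I(x) = J(x)·t(x) + A·(1 - t(x)) with transmission t(x) = exp(-β·d(x)), where I is the hazy image, J the scene radiance, A the atmospheric light, and d the scene depth. The sketch below simply inverts that model with t and A assumed known (the paper's multiple-scattering blur term and its deconvolution step are not modeled here):

    import numpy as np

    def dehaze_single_scatter(hazy, transmission, airlight, t_floor=0.1):
        # Recover scene radiance J from I = J*t + A*(1 - t).
        t = np.clip(transmission, t_floor, 1.0)   # avoid division by ~0 in dense haze
        return (hazy - airlight) / t[..., None] + airlight

    # Synthetic check: haze a random scene with the model, then invert it exactly.
    rng = np.random.default_rng(4)
    scene = rng.uniform(0.2, 0.8, size=(64, 64, 3))
    depth = np.linspace(0.5, 3.0, 64)[None, :].repeat(64, axis=0)
    t = np.exp(-0.8 * depth)                      # beta = 0.8
    A = 0.95
    hazy = scene * t[..., None] + A * (1.0 - t[..., None])
    restored = dehaze_single_scatter(hazy, t, A, t_floor=0.0)
    print("max reconstruction error:", np.abs(restored - scene).max())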

Digital Library: EI
Published Online: January  2020
Pages 88-1 - 88-5,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 16

A primary goal of the auto industry is to revolutionize transportation with autonomous vehicles. Given the mammoth nature of such a target, success depends on a clearly defined balance between technological advances, machine learning algorithms, physical and network infrastructure, safety, standards and regulations, and end-user education. Unfortunately, technological advancement is outpacing the regulatory space and competition is driving deployment. Moreover, hope is being built around algorithms that are far from reaching human-like capacities on the road. Since human behaviors and idiosyncrasies and natural phenomena are not going anywhere anytime soon and so-called edge cases are the roadway norm, the industry stands at a historic crossroads. Why? Because human factors such as cognitive and behavioral insights into how we think, feel, act, plan, make decisions, and problem-solve have been ignored. Human cognitive intelligence is foundational to driving the industry’s ambition forward. In this paper I discuss the role of the human in bridging the gaps between autonomous vehicle technology, design, implementation, and beyond.

Digital Library: EI
Published Online: January  2020
