Pages A15-1 - A15-8,  © Society for Imaging Science and Technology 2019
Digital Library: EI
Published Online: January  2019
Pages 27-1 - 27-8,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

Image quality (IQ) metrics are used to assess the quality of a detected image under a specified set of capture and display conditions. A large volume of work on IQ metrics has considered the quality of the image from an aesthetic point of view: the visual perception and appreciation of the final result. Metrics have also been developed for "informational" applications such as medical imaging, aerospace and military systems, scientific imaging, and industrial imaging. In these applications the criteria for image quality are based on information content and the ability to detect, identify, and recognize objects in a captured image. The development of automotive imaging systems requires similarly task-oriented IQ metrics, and many of the metrics developed for informational imaging are potentially useful here, since many of the tasks, such as object detection and identification, are similar. In this paper, we review the signal-to-noise ratio (SNR) of the ideal observer and present it as a useful metric for determining whether an object can be detected with confidence, given the characteristics of an automotive imaging system. We also show how this metric can be used to optimize system parameters for a defined task.
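
As a worked illustration of the metric (a textbook simplification under Poisson photon statistics, not necessarily the authors' exact formulation), the ideal-observer SNR for a uniform object on a uniform background can be computed as follows; the patch size and count levels are assumed example values.

    import numpy as np

    def ideal_observer_snr(mu_object, mu_background, area_px):
        """Ideal-observer SNR for a uniform object on a uniform background,
        assuming Poisson (photon shot) noise and independent pixels.

        mu_object, mu_background: mean photoelectron counts per pixel.
        area_px: object area in pixels. A textbook simplification, not
        necessarily the paper's exact formulation.
        """
        per_pixel_snr = abs(mu_object - mu_background) / np.sqrt(mu_background)
        return np.sqrt(area_px) * per_pixel_snr  # SNR grows with sqrt(area)

    # Example: a 20 x 10 px pedestrian patch at 120 e-/px on a 100 e-/px road.
    print(f"SNR_ideal = {ideal_observer_snr(120.0, 100.0, 200):.1f}")  # ~28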

Digital Library: EI
Published Online: January  2019
Pages 30-1 - 30-13,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

The automotive industry formed the IEEE-P2020 initiative to jointly work on key performance indicators (KPIs) that can be used to predict how well a camera system suits a given use case. A very fundamental application of cameras is to detect object contrasts, for object recognition or stereo-vision object matching. The most important KPI the group is working on is the contrast detection probability (CDP), a metric that describes the performance of components and systems and is independent of any assumptions about the camera model or other properties. While the theory behind CDP is already well established, we present actual measurement results and an implementation for camera tests. We also show how CDP can be used to improve low-light sensitivity and dynamic range measurements.
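
The sketch below shows one way CDP can be estimated from two measured patches; the Michelson contrast definition, the +/-10% tolerance band, and the pair-sampling scheme are illustrative assumptions, not the IEEE-P2020 specification.

    import numpy as np

    def contrast_detection_probability(bright_px, dark_px, tol=0.10,
                                       n_pairs=100_000, seed=0):
        """Estimate CDP from two measured patches (1-D arrays of pixel values).

        Samples random pixel pairs, computes their Michelson contrast, and
        returns the fraction falling within +/- tol of the nominal contrast.
        Contrast definition and tolerance band are illustrative assumptions.
        """
        rng = np.random.default_rng(seed)
        a = rng.choice(bright_px, n_pairs)
        b = rng.choice(dark_px, n_pairs)
        contrast = (a - b) / (a + b)              # per-pair Michelson contrast
        nominal = ((bright_px.mean() - dark_px.mean())
                   / (bright_px.mean() + dark_px.mean()))
        return np.mean(np.abs(contrast - nominal) <= tol * abs(nominal))

    # Example with synthetic noisy patches (means 200 and 100 DN, sigma 20 DN).
    rng = np.random.default_rng(1)
    cdp = contrast_detection_probability(rng.normal(200, 20, 10_000),
                                         rng.normal(100, 20, 10_000))
    print(f"CDP = {cdp:.2f}")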

Digital Library: EI
Published Online: January  2019
Pages 31-1 - 31-6,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

Hyperspectral image classification has received increasing attention from researchers in recent years. Hyperspectral imaging systems use sensors that acquire data mostly from the visible through the near-infrared wavelength ranges, capturing tens to hundreds of spectral bands. This detailed spectral information increases the possibility of accurately classifying materials. Unfortunately, conventional spectral camera sensors rely on spatial or spectral scanning during acquisition, which is only suitable for static scenes such as earth observation. In dynamic scenarios, such as autonomous driving applications, acquiring the entire hyperspectral cube in one step is mandatory. To enable hyperspectral classification and enhance terrain drivability analysis for autonomous driving, we investigate the suitability of novel mosaic-snapshot hyperspectral cameras, which capture an entire hyperspectral cube without requiring moving parts or line scanning. The sensor is mounted on a vehicle in a driving scenario in rough terrain with dynamic scenes, and the captured hyperspectral data are used for terrain classification with machine learning techniques. A major problem, however, is the presence of shadows in the captured scenes, which degrades the classification results. We present and test methods that automatically detect shadows by exploiting the near-infrared (NIR) part of the spectrum to build shadow maps. Using these shadow maps, a classifier can produce better results and avoid misclassifications due to shadows. The approaches are tested on our new hand-labeled hyperspectral dataset, acquired by driving through suburban areas with several hyperspectral snapshot-mosaic cameras.
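
A minimal sketch of the NIR-based shadow-map idea, assuming normalized red and NIR band images; the thresholds and the ratio test are illustrative values, not the authors' exact method.

    import numpy as np

    def shadow_map(nir, red, nir_thresh=0.15, ratio_thresh=1.2):
        """Boolean shadow map from normalized [0, 1] band images.

        Heuristic: surfaces in shadow receive little direct sunlight and so
        appear very dark in NIR, while diffuse skylight keeps the visible
        bands relatively brighter, raising the red/NIR ratio. Thresholds
        are illustrative, not calibrated values.
        """
        ratio = red / np.clip(nir, 1e-6, None)   # guard against division by zero
        return (nir < nir_thresh) & (ratio > ratio_thresh)

    # Usage: exclude flagged pixels from classifier training and inference,
    # e.g. mask = shadow_map(cube[..., NIR_BAND], cube[..., RED_BAND])
    # (NIR_BAND and RED_BAND are hypothetical band indices).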

Digital Library: EI
Published Online: January  2019
Pages 32-1 - 32-10,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

The safe navigation of large commercial vehicles implies an extended need for surveillance of the direct surroundings. Today's surround-view systems show heavy distortions and cover only a limited range around the vehicle, which may not be far enough to detect obstacles or persons early. We present a novel method that fuses an advanced, perspectively correct surround view with advanced obstacle detection. The proposed method is based on stereo vision and uses geometric modelling of the environment with a grid-map data structure. The grid map is processed by a refinement algorithm to overcome its limitations when approximating the shapes of the obstacles that are highlighted in the surround view.
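
A toy sketch of the kind of occupancy grid such a pipeline builds from stereo data, assuming points already triangulated into the vehicle frame; the cell size, map extent, and thresholds are example values.

    import numpy as np

    def occupancy_grid(points_xyz, cell=0.10, extent=20.0, min_height=0.3):
        """Accumulate stereo 3-D points (vehicle frame, meters) into a 2-D grid.

        A cell is marked occupied when enough points above ground height fall
        into it; a refinement stage would then smooth the obstacle shapes.
        Cell size, extent, and thresholds are illustrative.
        """
        n = int(2 * extent / cell)
        grid = np.zeros((n, n), dtype=np.int32)
        pts = points_xyz[points_xyz[:, 2] > min_height]      # drop ground returns
        ij = ((pts[:, :2] + extent) / cell).astype(int)
        ij = ij[((ij >= 0) & (ij < n)).all(axis=1)]          # clip to map bounds
        np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)             # count hits per cell
        return grid >= 5                                     # >= 5 supporting points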

Digital Library: EI
Published Online: January  2019
Pages 33-1 - 33-8,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

Road traffic signs provide vital information about traffic rules, road conditions, and route directions to assist drivers in safe driving. Recognition of traffic signs is one of the key features of Advanced Driver Assistance Systems (ADAS). In this paper, we present a Convolutional Neural Network (CNN) based approach for robust Traffic Sign Recognition (TSR) that can run in real time on low-power embedded systems. To achieve this, we propose a two-stage network: in the first stage, a generic traffic-sign detection network localizes the positions of traffic signs in the video footage, and in the second stage a country-specific classification network classifies the detected signs. The network sub-blocks were retrained to generate an optimal network that runs in real time on the Nvidia Tegra platform. The network's computational complexity and model size were further reduced to make it deployable on low-power embedded platforms. Methods such as network customization, weight pruning, and quantization schemes were used to achieve an 8X reduction in computational complexity. The pruned and optimized network was further ported and benchmarked on embedded platforms such as the Texas Instruments Jacinto TDA2x SoC and Qualcomm's Snapdragon 820 Automotive platform.
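
The abstract names weight pruning and quantization among the reduction methods; below is a hedged NumPy sketch of generic magnitude pruning and symmetric 8-bit quantization, as a stand-in for, not a reproduction of, the authors' scheme.

    import numpy as np

    def prune_weights(w, sparsity=0.7):
        """Zero out the smallest-magnitude weights (global magnitude pruning)."""
        threshold = np.quantile(np.abs(w), sparsity)
        return np.where(np.abs(w) < threshold, 0.0, w)

    def quantize_int8(w):
        """Uniform symmetric 8-bit quantization; returns int8 weights + scale."""
        scale = np.abs(w).max() / 127.0
        return np.round(w / scale).astype(np.int8), scale

    w = np.random.randn(256, 128).astype(np.float32)   # stand-in layer weights
    w_sparse = prune_weights(w)              # ~70% of weights set to zero
    q, s = quantize_int8(w_sparse)           # recover approximately with q * s
    print(f"sparsity = {(w_sparse == 0).mean():.2f}, scale = {s:.4f}")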

Digital Library: EI
Published Online: January  2019
Pages 34-1 - 34-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

This paper explores the use of stixels in a probabilistic stereo-vision-based collision-warning system that can be part of an ADAS for intelligent vehicles. In most current systems, collision warnings are based on radar or on monocular vision using pattern recognition (and ultrasound for park assist). Since detecting collisions is such a core functionality of intelligent vehicles, redundancy is key; we therefore explore the use of stereo vision for reliable collision prediction. Our algorithm consists of a Bayesian histogram filter that provides the probability of collision for multiple interception regions and angles towards the vehicle, and it could additionally be fused with other sources of information in larger systems. The algorithm builds upon the disparity Stixel World that has been developed for efficient automotive vision applications. Combined with image flow and uncertainty modeling, our system samples and propagates asteroids, dynamic particles that can be utilized for collision prediction. At best, our independent system detects all 31 simulated collisions (with 2 false warnings), while the same setting generates 12 false warnings on the real-world data.
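
A minimal sketch of one predict/update step of a Bayesian histogram filter over discretized interception angles; the motion and measurement models here are placeholders rather than the paper's flow-based ones.

    import numpy as np

    def histogram_filter_update(belief, likelihood, transition):
        """One predict/update step of a Bayesian histogram filter.

        belief: (n,) prior over angular bins around the ego vehicle.
        transition: (n, n) motion model, transition[i, j] = P(bin i | bin j).
        likelihood: (n,) measurement model, P(observation | bin).
        """
        predicted = transition @ belief          # predict (Chapman-Kolmogorov)
        posterior = likelihood * predicted       # measurement update (Bayes)
        return posterior / posterior.sum()       # normalize

    n = 36                                       # 10-degree angular bins (assumed)
    belief = np.full(n, 1.0 / n)                 # uniform prior
    transition = np.eye(n) * 0.8 + np.roll(np.eye(n), 1, axis=1) * 0.2
    likelihood = np.exp(-0.5 * ((np.arange(n) - 9) / 2.0) ** 2)  # obs near bin 9
    belief = histogram_filter_update(belief, likelihood, transition)
    print(f"P(interception in most likely sector) = {belief.max():.2f}")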

Digital Library: EI
Published Online: January  2019
Pages 35-1 - 35-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

In this work, we present a computer-vision- and machine-learning-backed autonomous drone surveillance system for protecting critical locations. The system is composed of a wide-angle, high-resolution daylight camera and a relatively narrow-angle thermal camera mounted on a rotating turret. The wide-angle daylight camera allows the detection of flying intruders as small as 20 pixels with a very low false-alarm rate. The primary detection is based on the YOLO convolutional neural network (CNN) rather than conventional background-subtraction algorithms, due to its low false-alarm rate. The detected flying objects are then tracked by the rotating turret and classified by the narrow-angle, zoomed thermal camera, where the classification algorithm is also based on CNNs. The algorithms are trained on artificial and augmented datasets, owing to the scarcity of infrared videos of drones.
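
A hypothetical sketch of the detect-track-classify hand-off between the two cameras; the detector, turret, and classifier interfaces are invented for illustration and are not the authors' API.

    # One step of the hypothetical surveillance loop.
    def surveillance_step(wide_frame, thermal_frame, detector, turret, classifier):
        detections = detector(wide_frame)       # YOLO-style CNN on the daylight feed
        if not detections:
            return None
        target = max(detections, key=lambda d: d.confidence)
        turret.point_at(target.azimuth, target.elevation)  # slew narrow-FOV optics
        label, score = classifier(thermal_frame)           # CNN on zoomed thermal chip
        return target, label, score                        # e.g. (..., "drone", 0.93)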

Digital Library: EI
Published Online: January  2019
Pages 39-1 - 39-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

This paper proposes a method with which mobile robots can explore unknown environments efficiently and effectively. Developing exploration methods for unknown environments is a challenging problem for mobile robots, and efficient exploration requires navigation suited to environment monitoring. Frontier-based exploration is the conventional method, but it is not necessarily a globally optimal planning method, because it pursues the local optimum in each situation; in a monitoring task it can fall into a local solution, since it does not take the overall observation efficiency into consideration. The proposed method exploits the whole observation and knowledge about the area of the unknown environment, and it can establish a global plan that considers both efficiency and effectiveness by integrating pattern-based and frontier-based exploration. Simulations show that the proposed combination of pattern-based and frontier-based exploration is useful for exploring unknown environments efficiently and effectively.
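
For reference, a minimal sketch of the frontier detection that the conventional baseline (and the proposed hybrid) relies on; the grid encoding is an assumption for the example.

    import numpy as np

    def frontier_cells(grid):
        """Find frontier cells in an occupancy grid.

        Encoding assumed for illustration: -1 unknown, 0 free, 1 occupied.
        A frontier is a free cell with at least one unknown 4-neighbor.
        """
        free = grid == 0
        unknown = grid == -1
        near_unknown = np.zeros_like(free)
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            rolled = np.roll(unknown, shift, axis=axis)
            # np.roll wraps around; zero out the wrapped edge row/column.
            if axis == 0:
                rolled[0 if shift == 1 else -1, :] = False
            else:
                rolled[:, 0 if shift == 1 else -1] = False
            near_unknown |= rolled
        return free & near_unknown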

Digital Library: EI
Published Online: January  2019
Pages 40-1 - 40-7,  © Society for Imaging Science and Technology 2019
Volume 31
Issue 15

Autonomous robots and self-driving vehicles require agents to learn and maintain accurate maps for safe and reliable operation. We use a variant of pose-graph Simultaneous Localization and Mapping (SLAM) to integrate multiple sensors for autonomous navigation in an urban environment. Our methods efficiently and accurately localize the agent across a stack of maps generated from different sensors over different periods of time. To incorporate a priori localization data, we account for the discrepancies between LiDAR observations and publicly available building geometry. We fuse data derived from heterogeneous sensor modalities to increase invariance to dynamic environmental factors, such as weather, luminance, and occlusions. To discriminate traversable terrain, we employ a deep segmentation network whose predictions increase the confidence of a LiDAR-generated cost map. Path planning is accomplished using the Timed-Elastic-Band algorithm on the persistent map created through SLAM. We evaluate our method in varying environmental conditions on a large university campus and show the efficacy of the sensor and map fusion.
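
A hedged sketch of the segmentation/LiDAR cost-map fusion idea, assuming co-registered 2-D grids; the convex-combination rule and its weight are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def fuse_cost_map(lidar_cost, p_traversable, alpha=0.5):
        """Blend a LiDAR-derived cost map (0 = free, 1 = lethal) with a
        segmentation network's per-cell traversability probability.

        The convex-combination rule and alpha are assumptions; the paper
        describes the network as increasing the cost map's confidence.
        """
        seg_cost = 1.0 - p_traversable           # high cost where the net sees no road
        fused = (1.0 - alpha) * lidar_cost + alpha * seg_cost
        # Keep lethal obstacles lethal regardless of the network's opinion.
        return np.where(lidar_cost >= 0.99, 1.0, fused)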

Digital Library: EI
Published Online: January  2019
