Keywords

Adaptive Radial Distance
CARLA Simulation
fisheye
Face Detection
field-of-view
Graph Neural Network (GNN)
Image classification
KNN Algorithm
Long Short-Term Memory (LSTM)
Modulation Transfer Function
natural scenes
Naturalistic Driving Studies
Object Detection
Privacy
regional masking
Region of Interest Selection
Spatial Frequency Response
Segment Anything (SAM)
Spatio-temporal relations
slanted edge
Trajectory Prediction
 
Pages 109-1 - 109-6, © 2024 Society for Imaging Science and Technology
Volume 36
Issue 17
Abstract

The Modulation Transfer Function (MTF) is an important image quality metric commonly used in the automotive domain. Yet although optical quality affects the performance of computer vision in vehicle automation, this metric is unknown for many public datasets. Additionally, wide field-of-view (FOV) cameras have become increasingly popular, particularly for low-speed vehicle automation applications. To investigate image quality in such datasets, this paper proposes an adaptation of the Natural Scenes Spatial Frequency Response (NS-SFR) algorithm to suit cameras with a wide field-of-view.
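
The core measurement underlying NS-SFR is the slanted-edge SFR/MTF computation applied to edge regions found in natural scene content. Below is a minimal sketch of that step only (not the paper's wide-FOV adaptation), assuming an oversampled 1-D edge spread function has already been extracted from an image region:

```python
import numpy as np

def sfr_from_edge_profile(esf: np.ndarray, dx: float = 1.0):
    """Estimate the SFR/MTF from a 1-D edge spread function (ESF).

    esf -- intensity samples across a dark-to-bright edge transition
    dx  -- sample pitch in pixels (0.25 for the usual 4x oversampling)
    """
    # The line spread function (LSF) is the derivative of the ESF.
    lsf = np.gradient(esf, dx)
    # A window suppresses noise away from the edge centre.
    lsf = lsf * np.hanning(len(lsf))
    # The MTF is the magnitude of the Fourier transform of the LSF,
    # normalised to 1 at zero frequency.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=dx)  # cycles per pixel
    return freqs, mtf
```

A full ISO 12233-style implementation first projects a slanted edge onto a 4x-oversampled ESF, and NS-SFR additionally detects and validates usable edge regions in natural scenes; the paper's contribution is adapting that pipeline to wide-FOV cameras.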

Digital Library: EI
Published Online: January 2024
Pages 112-1 - 112-6, © 2024 Society for Imaging Science and Technology
Volume 36
Issue 17
Abstract

Naturalistic driving studies, in which drivers use their personal vehicles, provide valuable real-world data, but privacy issues must be handled very carefully. Drivers sign a consent form when they elect to participate, but for a variety of practical reasons passengers do not; their privacy must nevertheless be protected. One large study includes a blurred image of the entire cabin, which allows reviewers to find passengers in the vehicle; this protects privacy while still providing a means of answering questions about the impact of passengers on driver behavior. A method for automatically counting the passengers would therefore have scientific value for transportation researchers. We investigated several image analysis methods for automatically locating and counting the non-drivers: simple face detection, fine-tuned image classification, and a published object detection method. For image classification, we also compared convolutional neural network and vision transformer backbones. Our studies show that the image classification method works best in terms of absolute performance, although the closed nature of our dataset and the character of the imagery make the application somewhat niche, and object detection methods have their own advantages. We present analysis to support this conclusion.
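
To illustrate the image classification approach in code, one common recipe is to fine-tune a pretrained backbone whose final layer is replaced so that the class label is the passenger count. The sketch below assumes PyTorch/torchvision; the backbone choice, the cap on passenger count, and the input size are hypothetical, not the authors' configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

MAX_PASSENGERS = 3  # hypothetical cap: classes are counts 0..3

def build_count_classifier(num_classes: int = MAX_PASSENGERS + 1) -> nn.Module:
    """A pretrained CNN with its final layer replaced, so that
    predicting the class label amounts to counting passengers."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_count_classifier().eval()
cabin = torch.randn(1, 3, 224, 224)  # stand-in for a blurred cabin image
with torch.no_grad():
    predicted_count = model(cabin).argmax(dim=1).item()
```

A vision transformer backbone (e.g. torchvision's vit_b_16 with its classification head swapped in the same way) slots into the same recipe, which is the CNN-vs-ViT comparison the abstract describes.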

Digital Library: EI
Published Online: January 2024
Pages 115-1 - 115-6, © 2024 Society for Imaging Science and Technology
Volume 36
Issue 17
Abstract

Predicting the trajectory of the ego vehicle is a critical component of autonomous driving systems. Current state-of-the-art methods typically rely on Deep Neural Networks (DNNs) and sequential models that process front-view images for future trajectory prediction. However, these approaches often struggle with perspective effects on object features in the scene. To address this, we advocate the use of Bird's Eye View (BEV) perspectives, which offer unique advantages in capturing spatial relationships and object homogeneity. In our work, we leverage Graph Neural Networks (GNNs) and positional encoding to represent objects in a BEV, achieving performance competitive with traditional DNN-based methods. While the BEV representation loses some of the detail inherent to front-view images, we compensate by representing the BEV data as a graph in which relationships between the objects in a scene are captured effectively.
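
To make the graph construction concrete, the sketch below encodes each object's BEV position with sinusoidal features, connects all objects to one another, and regresses future ego waypoints after two rounds of message passing. It assumes PyTorch Geometric; the encoding dimensions, fully connected edges, and prediction head are illustrative choices, not the paper's architecture:

```python
import torch
import torch.nn as nn
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

def sinusoidal_encoding(xy: torch.Tensor, dim: int = 32) -> torch.Tensor:
    """Positional encoding of BEV (x, y) coordinates: sinusoids of
    geometrically spaced frequencies, concatenated per coordinate."""
    freqs = 2.0 ** torch.arange(dim // 4, dtype=torch.float32)
    angles = xy.unsqueeze(-1) * freqs  # (N, 2, dim/4)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)  # (N, dim)

class BEVTrajectoryGNN(nn.Module):
    """Toy model: nodes are objects in the BEV; message passing
    aggregates spatial relations; a head regresses ego waypoints."""
    def __init__(self, in_dim: int = 32, hidden: int = 64, horizon: int = 10):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, horizon * 2)  # (x, y) per future step

    def forward(self, data: Data) -> torch.Tensor:
        h = torch.relu(self.conv1(data.x, data.edge_index))
        h = torch.relu(self.conv2(h, data.edge_index))
        return self.head(h[0]).view(-1, 2)  # node 0 is the ego vehicle

# Usage: five objects in the BEV, fully connected (no self-loops).
xy = torch.randn(5, 2)
idx = torch.arange(5)
src, dst = torch.meshgrid(idx, idx, indexing="ij")
mask = src != dst
edge_index = torch.stack([src[mask], dst[mask]])  # shape (2, 20)
waypoints = BEVTrajectoryGNN()(Data(x=sinusoidal_encoding(xy), edge_index=edge_index))
print(waypoints.shape)  # torch.Size([10, 2])
```

Representing the scene as a graph is what lets the model recover relational detail between objects that the rasterised BEV alone would lose, which is the trade-off the abstract describes.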

Digital Library: EI
Published Online: January 2024
