Volume: 32 | Article ID: art00021
LiDAR-Camera Fusion for 3D Object Detection
DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-257 | Published Online: January 2020

The performance of autonomous agents in both commercial and consumer applications increases along with their situational awareness. Tasks such as obstacle avoidance, agent-to-agent interaction, and path planning depend directly on the ability to convert sensor readings into scene understanding. Central to this is the ability to detect and recognize objects. Many object detection methodologies operate on a single modality such as vision or LiDAR. Camera-based object detection models benefit from an abundance of feature-rich information for classifying different types of objects. LiDAR-based object detection models use sparse point clouds, where each point contains an accurate 3D position on an object surface. Camera-based methods lack accurate object-to-lens distance measurements, while LiDAR-based methods lack dense, feature-rich detail. By utilizing information from both camera and LiDAR sensors, advanced object detection and identification is possible. In this work, we introduce a deep learning framework for fusing these modalities to produce a robust real-time 3D bounding box object detection network. We present qualitative and quantitative analysis of the proposed fusion model on the popular KITTI dataset.
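The geometric step underlying any LiDAR-camera fusion is projecting 3D LiDAR points into the camera image so that point-cloud geometry can be associated with image features. The sketch below illustrates this in KITTI's convention (a projection matrix `P2` and a LiDAR-to-camera transform `Tr_velo_to_cam`); the matrix values are illustrative placeholders, not actual KITTI calibration data, and KITTI's rectification matrix `R0_rect` is assumed folded into the transform for brevity. This is not the paper's specific network, only the standard projection geometry it builds on.

```python
import numpy as np

# Hypothetical calibration in KITTI's convention (illustrative values,
# not from a real KITTI calibration file).
P2 = np.array([[700.0,   0.0, 600.0, 0.0],   # camera projection matrix (3x4)
               [  0.0, 700.0, 180.0, 0.0],
               [  0.0,   0.0,   1.0, 0.0]])

# LiDAR frame (x forward, y left, z up) -> camera frame (z forward,
# x right, y down). Rectification is assumed folded in here.
Tr_velo_to_cam = np.array([[0.0, -1.0,  0.0, 0.0],
                           [0.0,  0.0, -1.0, 0.0],
                           [1.0,  0.0,  0.0, 0.0],
                           [0.0,  0.0,  0.0, 1.0]])

def project_lidar_to_image(points_velo):
    """Project Nx3 LiDAR points to Nx2 pixel coordinates."""
    n = points_velo.shape[0]
    pts_h = np.hstack([points_velo, np.ones((n, 1))])  # homogeneous Nx4
    pts_cam = Tr_velo_to_cam @ pts_h.T                 # 4xN in camera frame
    pix = P2 @ pts_cam                                 # 3xN homogeneous pixels
    return (pix[:2] / pix[2]).T                        # perspective divide

# A point 10 m directly ahead of the LiDAR projects to the principal point.
print(project_lidar_to_image(np.array([[10.0, 0.0, 0.0]])))  # [[600. 180.]]
```

Once points are projected, each LiDAR return can be paired with the image features at its pixel location, which is the basis for fusing the sparse-but-accurate depth of LiDAR with the dense appearance information of the camera.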

  Cite this article 

Darshan Bhanushali, Robert Relyea, Karan Manghi, Abhishek Vashist, Clark Hochgraf, Amlan Ganguly, Andres Kwasinski, Michael E. Kuhl, Raymond Ptucha, "LiDAR-Camera Fusion for 3D Object Detection," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Autonomous Vehicles and Machines, 2020, pp. 257-1 - 257-9.

  Copyright statement 
Copyright © Society for Imaging Science and Technology 2020
Electronic Imaging
Society for Imaging Science and Technology
7003 Kilworth Lane, Springfield, VA 22151 USA