In autonomous vehicle systems, sensing the surrounding environment is critical to making the right driving decisions. Understanding the neighboring environment from sensor data enables the vehicle to be aware of nearby moving objects (e.g., vehicles or pedestrians) and therefore to avoid collisions. This local situational awareness mostly depends on extracting information from a variety of sensors (e.g., camera, LIDAR, RADAR), each of which has its own operating conditions (e.g., lighting, range, power). One of the open issues in reconstructing and understanding the environment of an autonomous vehicle is how to fuse locally sensed data to support a specific decision task such as vehicle detection. In this paper, we study the problem of fusing data from camera and LIDAR sensors and propose a novel 6D (RGB+XYZ) data representation to support visual inference. This work extends the previous Position and Intensity-included Histogram of Oriented Gradients (PIHOG or πHOG) descriptor from color space to the proposed 6D space, aiming to achieve more reliable vehicle detection than single-sensor approaches. Our experimental results validate the effectiveness of the proposed multi-sensor data fusion approach; it achieves a detection accuracy of 73% on the challenging KITTI dataset.
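To make the proposed 6D (RGB+XYZ) representation concrete, the sketch below fuses a camera image with a LIDAR scan by projecting each LIDAR point onto the image plane using KITTI-style calibration matrices. This is a minimal sketch under assumed inputs: the function name `build_rgbxyz`, the matrix names (`P_rect`, `R_rect`, `Tr_velo_to_cam`), and the zero-fill for pixels without a LIDAR return are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_rgbxyz(image, lidar_points, P_rect, R_rect, Tr_velo_to_cam):
    """Fuse a camera image and a LIDAR scan into a 6-channel
    (RGB + XYZ) array, following KITTI calibration conventions.
    Pixels with no LIDAR return keep XYZ = 0 (an assumption here).

    image:          (H, W, 3) uint8 RGB image
    lidar_points:   (N, 3) LIDAR points in the Velodyne frame
    P_rect:         (3, 4) rectified camera projection matrix
    R_rect:         (4, 4) rectifying rotation, padded to 4x4
    Tr_velo_to_cam: (4, 4) Velodyne-to-camera transform, padded
    """
    H, W, _ = image.shape

    # Homogeneous LIDAR coordinates: (N, 4)
    pts_h = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])

    # Transform into the rectified camera frame
    pts_cam = (R_rect @ Tr_velo_to_cam @ pts_h.T).T  # (N, 4)

    # Keep only points in front of the camera
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Project onto the image plane and normalize by depth
    proj = (P_rect @ pts_cam.T).T            # (N, 3)
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)

    # Discard projections that fall outside the image
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v = u[inside], v[inside]
    xyz = pts_cam[inside, :3]

    # Assemble the 6D array: channels 0-2 = RGB, 3-5 = XYZ
    fused = np.zeros((H, W, 6), dtype=np.float32)
    fused[..., :3] = image.astype(np.float32) / 255.0
    fused[v, u, 3:] = xyz
    return fused
```

Because a LIDAR scan is much sparser than the image grid, most XYZ entries remain zero after projection; a descriptor such as πHOG would then be computed over these six channels rather than over RGB alone.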
Abu Hasnat Mohammad Rubaiyat, Yaser Fallah, Xin Li, and Gaurav Bansal (Toyota Infotechnology), "Multi-sensor Data Fusion for Vehicle Detection in Autonomous Vehicle Applications," in Proc. IS&T Int'l. Symp. on Electronic Imaging: Autonomous Vehicles and Machines, 2018, pp. 257-1 to 257-6, https://doi.org/10.2352/ISSN.2470-1173.2018.17.AVM-257