Description
The performance of autonomous agents in both commercial and consumer applications increases with their situational awareness. Tasks such as obstacle avoidance, agent-to-agent interaction, and path planning depend directly on the agent's ability to convert sensor readings into scene understanding. Central to this is the ability to detect and recognize objects. Many object detection methodologies operate on a single modality such as vision or LiDAR. Camera-based object detection models benefit from an abundance of feature-rich information for classifying different types of objects. LiDAR-based object detection models use sparse point clouds, where each point gives an accurate 3D position on an object's surface. Camera-based methods lack accurate object-to-lens distance measurements, while LiDAR-based methods lack dense, feature-rich detail. By utilizing information from both camera and LiDAR sensors, advanced object detection and identification is possible. In this work, we introduce a deep learning framework that fuses these modalities to produce a robust real-time 3D bounding-box object detection network. We present a qualitative and quantitative analysis of the proposed fusion model on the popular KITTI dataset.
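The geometric step common to camera-LiDAR fusion on KITTI is projecting LiDAR points into the camera image plane so that point-cloud depth can be associated with image features. The sketch below illustrates this projection using KITTI's standard calibration matrices (Tr_velo_to_cam, R0_rect, P2); it is a minimal illustration under those conventions, not a reproduction of the paper's fusion network, and the function name project_lidar_to_image is our own.

```python
# Minimal sketch: project LiDAR points into the KITTI left color camera.
# Matrix names follow KITTI calibration-file conventions; the calibration
# values in __main__ are synthetic placeholders, not from any real sequence.
import numpy as np

def project_lidar_to_image(points, Tr_velo_to_cam, R0_rect, P2):
    """Project Nx3 LiDAR points (x, y, z) to 2D pixel coordinates.

    Tr_velo_to_cam: 3x4 LiDAR-to-camera extrinsic matrix.
    R0_rect:        3x3 rectifying rotation.
    P2:             3x4 projection matrix of the left color camera.
    Returns pixel coordinates (Mx2) and depths (M,) for points in
    front of the camera.
    """
    n = points.shape[0]
    # Homogeneous LiDAR coordinates: Nx4.
    pts_h = np.hstack([points, np.ones((n, 1))])
    # Transform into the rectified camera frame: 3xN.
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)
    # Discard points behind (or on) the image plane.
    in_front = cam[2, :] > 0.1
    cam = cam[:, in_front]
    # Project to pixels and normalize by depth.
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])  # 4xN
    pix = P2 @ cam_h                                      # 3xN
    pix = pix[:2, :] / pix[2, :]
    return pix.T, cam[2, :]

if __name__ == "__main__":
    # Synthetic calibration; real values come from a KITTI calib file.
    P2 = np.array([[721.5, 0.0, 609.6, 44.9],
                   [0.0, 721.5, 172.9, 0.2],
                   [0.0, 0.0, 1.0, 0.003]])
    R0_rect = np.eye(3)
    Tr = np.array([[0.0, -1.0, 0.0, 0.0],
                   [0.0, 0.0, -1.0, -0.08],
                   [1.0, 0.0, 0.0, -0.27]])
    # One point roughly 10 m ahead of the sensor, 1 m below it.
    pts = np.array([[10.0, 0.0, -1.0]])
    uv, depth = project_lidar_to_image(pts, Tr, R0_rect, P2)
    print(uv, depth)
```

Once each surviving point carries both a pixel location and a depth, a fusion network can pair image features with accurate range measurements, which is the complementarity the abstract describes.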
Date of creation, presentation, or exhibit
1-26-2020
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Document Type
Conference Paper
Department, Program, or Center
Industrial and Systems Engineering (KGCOE)
Recommended Citation
D. Bhanushali et al., "LiDAR-Camera Fusion for 3D Object Detection," Electronic Imaging, vol. 2020, no. 16, pp. 257-1–257-9, Jan. 2020, doi: 10.2352/ISSN.2470-1173.2020.16.AVM-257.
Campus
RIT – Main Campus