Semantic Segmentation Guided SLAM Using Vision and LIDAR

Conference: ISR 2018 - 50th International Symposium on Robotics
06/20/2018 - 06/21/2018 at München, Germany

Proceedings: ISR 2018

Pages: 7
Language: English
Type: PDF


Authors:
Patel, Naman; Krishnamurthy, Prashanth; Khorrami, Farshad (Control/Robotics Research Laboratory (CRRL), Department of Electrical & Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY 11201, USA)

Abstract:
This paper presents a novel framework for incorporating semantic information into a Simultaneous Localization and Mapping (SLAM) pipeline based on LIDAR and camera to improve navigation accuracy and alleviate drift caused by translation and rotation errors. Specifically, an unmanned ground vehicle (UGV) equipped with a camera and a LIDAR and operating in an indoor environment is considered. The proposed method uses features extracted from the camera image and their correspondences in the LIDAR depth map to obtain the pose relative to a keyframe; this pose is then refined using semantic features obtained from a deep neural network. Additionally, each point in the map is associated with a semantic label to perform semantically guided local and global pose optimization. Since semantically correlated features can be expected to have a higher likelihood of correct data association, the proposed coupling of semantic labeling and SLAM provides better robustness and accuracy. We demonstrate our approach on such a UGV operating in an indoor environment.
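
The proceedings page does not include code, but the data-association idea described in the abstract (prefer feature matches whose semantic labels agree, then solve for the relative pose) can be sketched in a few lines. The sketch below is an illustration only, not the authors' implementation: the function names, the Kabsch-style weighted rigid alignment, and the idea of weighting correspondences by segmentation confidence are all assumptions made here for concreteness.

    import numpy as np

    def filter_matches_by_semantics(matches, labels_cur, labels_key):
        """Keep only putative matches whose semantic labels agree.

        matches    : list of (i, j) index pairs (current frame i <-> keyframe j)
        labels_cur : class IDs of current-frame features (from the segmentation net)
        labels_key : class IDs of keyframe features
        """
        return [(i, j) for (i, j) in matches if labels_cur[i] == labels_key[j]]

    def estimate_pose_weighted(src, dst, w):
        """Weighted rigid alignment of matched 3D points (camera features
        back-projected with LIDAR depth).

        Minimizes sum_k w_k * ||R @ src_k + t - dst_k||^2 in closed form.
        src, dst : (N, 3) arrays of matched 3D points
        w        : (N,) nonnegative weights, e.g. semantic label confidence
        """
        w = np.asarray(w, dtype=float)
        w = w / w.sum()
        mu_s = w @ src                      # weighted centroid of source points
        mu_d = w @ dst                      # weighted centroid of target points
        cov = (src - mu_s).T @ np.diag(w) @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(cov)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_d - R @ mu_s
        return R, t

Gating matches on label agreement cheaply discards many wrong associations before pose estimation, and the remaining correspondences can be down-weighted by segmentation confidence through w; this is one plausible reading of the "semantically guided" local pose optimization the abstract describes.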