Ego-Motion Correction based on Static Objects detected by an Automotive Lidar Sensor System

Conference: AmE 2019 – Automotive meets Electronics - 10. GMM-Fachtagung
March 12–13, 2019 in Dortmund, Germany

Proceedings: GMM-Fb. 93: AmE 2019

Pages: 6 · Language: English · Type: PDF


Stannartz, Niklas; Wissing, Christian; Bertram, Torsten (TU Dortmund University, Institute of Control Theory and Systems Engineering, 44227 Dortmund, Germany)
Krueger, Martin; Tolmidis, Avraam; Ali, Syed Irtiza; Nattermann, Till (ZF, Automated Driving & Integral Cognitive Safety, 40547 Düsseldorf, Germany)

This paper presents a framework for ego-motion correction that exploits static objects in the environment detected by an automotive lidar sensor system. Many previous works tackle this problem using raw point cloud data, which improves performance with respect to dead reckoning (DR) but neglects the embedded object tracking algorithms already integrated in automotive sensors and increases the computational demands. Since exteroceptive sensors (cameras, radars, and lidars) typically segment distinct objects of the environment from the raw measurements, using classified static objects is a natural choice for ego-motion correction that reduces the computational overhead at the same time. The ego-motion correction is based on an observability-constrained Unscented Kalman Filter (OCUKF) exploiting detected point- and line-shaped static objects that are temporally maintained in a local map of the visible surroundings. For the latter type, a measurement model for continuous line-shaped landmarks is proposed that tackles disadvantages of the model commonly used in the literature. Finally, experimental results demonstrate the improved ego-motion estimation of the OCUKF algorithm with respect to DR.
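For context, the line-landmark measurement model commonly used in the literature (the baseline the proposed model improves on) represents a line in Hesse normal form (normal angle α, perpendicular distance r) and predicts how it appears from the ego pose. The sketch below is illustrative only; the function name and the sign-flip convention are assumptions, not taken from the paper.

```python
import math

def line_measurement(pose, line):
    """Predict a line landmark, given in Hesse normal form
    (normal angle alpha, perpendicular distance r) in the world
    frame, as observed from the ego pose (x, y, theta).
    Returns the predicted (alpha', r') in the ego frame.
    Illustrative sketch of the standard model, not the paper's
    proposed measurement model."""
    x, y, theta = pose
    alpha, r = line
    # Project the ego position onto the line normal and rotate
    # the normal angle into the ego frame.
    r_pred = r - (x * math.cos(alpha) + y * math.sin(alpha))
    a_pred = alpha - theta
    # Keep the representation unique: enforce r' >= 0 by flipping
    # the normal direction when the ego crosses the line.
    if r_pred < 0:
        r_pred = -r_pred
        a_pred += math.pi
    # Normalize the angle to (-pi, pi].
    a_pred = math.atan2(math.sin(a_pred), math.cos(a_pred))
    return a_pred, r_pred
```

A known drawback of this infinite-line parameterization, which the paper's model for continuous line-shaped landmarks addresses, is that it ignores the finite extent of the observed segment, so motion along the line direction is unobservable from a single line feature.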