Point Cloud Segmentation with a High-Resolution Automotive Radar

Conference: AmE 2019 – Automotive meets Electronics – 10th GMM-Fachtagung
12.03.2019 – 13.03.2019 in Dortmund, Germany

Proceedings: GMM-Fb. 93: AmE 2019

Pages: 5
Language: English
Type: PDF


Authors:
Feng, Zhaofei; Zhang, Shuo; Kunert, Martin (Advanced Engineering Sensor Systems, Robert Bosch GmbH, 71226 Leonberg, Germany)
Wiesbeck, Werner (Institut für Hochfrequenztechnik und Elektronik, Karlsruher Institut für Technologie, 76131 Karlsruhe, Germany)

Abstract:
Deep learning is currently widely used in video-based classification and segmentation tasks for autonomous vehicles. With images as input, the pixels form structured input data for a convolutional neural network. To improve the reliability of the perception system of automated vehicles, other sensors can be employed simultaneously to complement the shortcomings of video camera sensors. Among these candidates, RADAR and LIDAR sensors are the two most prominent. Their input data format differs fundamentally from that of the camera sensor: object reflection points are collected instead of pixels. A simple way to handle these reflection points is to manually create a grid map and fill it with the reflection points. However, this not only causes information loss during the filling process, but also increases the input data size significantly because of the large number of grid cells, especially when a 3D grid representation is used. It can therefore be advantageous to feed the reflection points directly into the neural network. Such an unstructured input data format, however, does not work with conventional, grid-based convolutional neural networks. In this paper, we present application results of a recently developed point-based neural network for radar reflection point cloud segmentation.
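The grid-map drawbacks mentioned in the abstract can be illustrated with a minimal sketch (not from the paper; the function name, coordinate ranges, and cell size are hypothetical choices for illustration). Rasterizing 2D reflection points into an occupancy grid collapses nearby points into the same cell (information loss), while the grid the network must consume is far larger than the original point list:

```python
import numpy as np

def points_to_grid(points, x_range=(0.0, 100.0), y_range=(-50.0, 50.0), cell=0.5):
    """Rasterize radar reflection points (x, y) into a 2D count grid.

    Illustrative only: ranges and cell size are assumptions, not values
    from the paper. Each cell counts the reflections falling into it,
    so sub-cell position information is discarded.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.int32)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    # Drop points outside the mapped area.
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.add.at(grid, (ix[valid], iy[valid]), 1)
    return grid

# Three reflections; the first two land in the same 0.5 m cell.
points = np.array([[10.2, -3.1], [10.3, -3.2], [55.0, 20.0]])
grid = points_to_grid(points)
# 3 points become a 200 x 200 = 40,000-cell input for the network.
```

A point-based network, by contrast, would take the 3 x 2 point array directly, avoiding both the quantization and the blow-up in input size.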