A Data Generation Framework for Acoustic Drone Detection Algorithms

Conference: Speech Communication - 14th ITG Conference
29.09.2021 - 01.10.2021, online

Proceedings: ITG-Fb. 298: Speech Communication

Pages: 5
Language: English
Type: PDF


Authors:
Jarocky, Nikita; Urrigshardt, Sebastian; Kurth, Frank (Fraunhofer FKIE, Wachtberg, Germany)

Abstract:
State-of-the-art drone detection systems generally combine different sensors, such as radar, acoustic, radio frequency (RF), or optical sensors, each of which contributes individual capabilities. Acoustic sensors have a relatively short spatial range; on the other hand, they can be used under poor visibility and in scenarios without a line of sight between sensor and drone. The development of acoustic drone detection (ADD) algorithms typically requires substantial amounts of realistic recordings of flying drones. Moreover, when evaluating methods for robustly extracting characteristic properties of drones from sensor data, such as the fundamental frequency (F0) of the rotor blades, knowledge of ground-truth (GT) data is important. In this paper we present a framework for automatically generating both acoustic drone recordings and GT transcripts of the corresponding time-synchronous motor rotations. We present an application that uses the data generated in this way to evaluate an ADD algorithm based on F0 tracking.
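To illustrate the kind of F0 tracking the abstract refers to, the following is a minimal sketch of an autocorrelation-based F0 estimator applied to a synthetic rotor-like tone. This is not the paper's algorithm or data; the function name, parameters, and the 120 Hz test signal are illustrative assumptions.

```python
import numpy as np

def estimate_f0(frame, sr, f0_min=50.0, f0_max=500.0):
    """Estimate the fundamental frequency of one audio frame (Hz)
    via the peak of the autocorrelation function.
    Hypothetical helper for illustration only."""
    sig = frame - np.mean(frame)
    # One-sided autocorrelation (lags >= 0)
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sr / f0_max)  # smallest period of interest
    lag_max = int(sr / f0_min)  # largest period of interest
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sr / lag

# Synthetic stand-in for a drone recording: a 120 Hz fundamental
# with two weaker harmonics, one second at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = sum(np.sin(2 * np.pi * 120.0 * k * t) / k for k in (1, 2, 3))

f0 = estimate_f0(tone, sr)
```

In a real evaluation, an estimate like `f0` computed per frame would be compared against the GT motor-rotation transcripts the framework generates.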