A Data Generation Framework for Acoustic Drone Detection Algorithms

Conference: Speech Communication - 14th ITG Conference
09/29/2021 - 10/01/2021, held online

Proceedings: ITG-Fb. 298: Speech Communication

Pages: 5
Language: English
Type: PDF


Authors:
Jarocky, Nikita; Urrigshardt, Sebastian; Kurth, Frank (Fraunhofer FKIE, Wachtberg, Germany)

Abstract:
State-of-the-art drone detection systems generally combine different sensors, such as radar, acoustic, radio frequency (RF), or optical sensors, each of which contributes individual capabilities. Acoustic sensors have a relatively short spatial range; on the other hand, they can be used under conditions of poor visibility and in scenarios without a line of sight between sensor and drone. The development of acoustic drone detection (ADD) algorithms typically requires substantial amounts of realistic recordings of flying drones. Moreover, when evaluating methods for robustly extracting characteristic properties of drones from sensor data, such as the fundamental frequency (F0) of the rotor blades, knowledge of ground truth (GT) data is important. In this paper we present a framework for automatically generating both acoustic drone recordings and GT transcripts of the corresponding time-synchronous motor rotations. We further present an application that uses the generated data to evaluate an ADD algorithm based on F0 tracking.
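
To illustrate the kind of evaluation the abstract describes, the following Python sketch estimates a rotor-blade F0 from an audio frame and compares it with an F0 derived from a ground-truth motor rotation rate. This is not the authors' algorithm; the function names, parameters (band limits, blade count), and the synthetic test signal are assumptions made purely for illustration.

# Minimal illustrative sketch (assumption, not the paper's method):
# estimate the rotor-blade fundamental frequency (F0) from an audio frame
# and compare it against an F0 derived from a ground-truth motor speed.
import numpy as np

def estimate_f0(frame, sample_rate, f_min=50.0, f_max=400.0):
    """Estimate F0 as the strongest spectral peak within a plausible rotor band."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    band = (freqs >= f_min) & (freqs <= f_max)
    return freqs[band][np.argmax(spectrum[band])]

def f0_from_ground_truth(rpm, n_blades=2):
    """Expected blade-passing frequency for a motor rotating at `rpm` (assumed 2 blades)."""
    return rpm / 60.0 * n_blades

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    rpm_gt = 5400.0                       # hypothetical ground-truth motor speed
    f0_gt = f0_from_ground_truth(rpm_gt)  # 180 Hz for a two-blade rotor
    # Synthetic stand-in for a drone recording: F0 plus two harmonics and noise.
    signal = (np.sin(2 * np.pi * f0_gt * t)
              + 0.5 * np.sin(2 * np.pi * 2 * f0_gt * t)
              + 0.25 * np.sin(2 * np.pi * 3 * f0_gt * t)
              + 0.1 * np.random.randn(len(t)))
    f0_est = estimate_f0(signal, sr)
    print(f"GT F0: {f0_gt:.1f} Hz, estimated F0: {f0_est:.1f} Hz")

In practice, the GT transcripts of time-synchronous motor rotations described in the abstract would replace the hypothetical rpm_gt value above, and the estimator would run frame by frame over real recordings rather than a synthetic tone.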