Recording a Complex, Multi Modal Activity Data Set for Context Recognition

Conference: ARCS 2010 - 23rd International Conference on Architecture of Computing Systems
22.02.2010 - 23.02.2010 in Hannover, Germany

Proceedings: ARCS 2010

Pages: 6 | Language: English | Type: PDF


Authors:
Lukowicz, P.; Pirkl, G.; Bannach, D.; Wagner, F. (Embedded Systems Lab, University of Passau, Germany)
Calatroni, A.; Förster, K.; Holleczek, T.; Rossi, M.; Roggen, D.; Troester, G. (Wearable Computing Lab, ETH, Switzerland)
Doppler, J.; Holzmann, C.; Riener, A.; Ferscha, A. (Institute Pervasive Computing, JKU Linz, Austria)
Chavarriaga, R. (Defitech Foundation Chair in Non-Invasive Brain-Machine Interface, EPFL Lausanne, Switzerland)

Abstract:
Publicly available data sets are increasingly becoming an important research tool in context recognition. However, due to the diversity and complexity of the domain, it is difficult to provide standard recordings that cover the majority of possible applications and research questions. In this paper we describe a novel data set that combines a number of properties that, in this combination, are missing from existing data sets. These include complex, overlapping and hierarchically decomposable activities, a large number of repetitions, a significant number of different users, and a highly multimodal sensor setup. The set contains around 25 hours of data from 12 subjects. On the low level there are around 30,000 individual annotated actions (e.g. picking up a knife, opening a drawer). On the highest level (e.g. getting up, breakfast preparation) we have around 200 context instances. Overall, 72 sensors from 10 different modalities (different on-body motion sensors, different sound sources, two cameras, video, object usage, device power consumption and location) were recorded.