Image Transformation based Features for the Visual Discrimination of Prominent and Non-Prominent Words
Conference: Sprachkommunikation - Beiträge zur 10. ITG-Fachtagung
09/26/2012 - 09/28/2012 at Braunschweig, Germany
Pages: 4 | Language: English | Type: PDF
Martin Heckmann (Honda Research Institute Europe GmbH, 63073 Offenbach/Main, Germany)
This paper investigates how visual information extracted from a speaker's mouth region can be used to discriminate prominent from non-prominent words. The analysis relies on a database in which users played a small game with a computer in a Wizard-of-Oz experiment. Users were instructed to correct recognition errors of the system, which was expected to render the corrected word highly prominent. Audio-visual recordings were made with a distant microphone and without visual markers. As acoustic features, relative energy and fundamental frequency were calculated; from the visual channel, image-transformation-based features were extracted from the mouth region. The paper compares FFT, DCT, and PCA as image transformations with a varying number of coefficients, and investigates the performance of the visual features both on their own and in combination with the acoustic features. The comparison is based on classification with a Support Vector Machine (SVM). The results show that all three image transformations yield an accuracy of approximately 65% in this binary classification task. Furthermore, the information extracted from the visual channel is complementary to the acoustic information: combining both modalities significantly improves performance, up to approximately 80%.
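The abstract does not include code, but the feature-extraction pipeline it describes (low-order 2D-DCT coefficients from a mouth-region image, fed to an SVM) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the ROI size, the number of retained coefficients, the SVM kernel, and the synthetic stand-in data are all assumptions.

```python
import numpy as np
from scipy.fft import dctn          # 2D DCT (type-II), one of the transforms compared
from sklearn.svm import SVC

def dct_features(mouth_roi, n_coeffs=8):
    """Keep the low-frequency top-left n_coeffs x n_coeffs block of the 2D DCT
    of a grayscale mouth-region image, flattened into a feature vector."""
    coeffs = dctn(mouth_roi, norm="ortho")
    return coeffs[:n_coeffs, :n_coeffs].ravel()

# Synthetic stand-in "mouth images": 40 non-prominent (label 0) and 40
# prominent (label 1) samples; the label shifts the mean intensity so the
# classes are separable. Real data would be tracked mouth ROIs per word.
rng = np.random.default_rng(0)
X = np.array([dct_features(rng.random((32, 32)) + label)
              for label in (0, 1) for _ in range(40)])
y = np.repeat([0, 1], 40)

# Binary prominence classification with an SVM, as in the paper.
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

FFT- or PCA-based variants would replace `dctn` with `np.fft.fft2` (keeping coefficient magnitudes) or a learned `sklearn.decomposition.PCA` projection, while the SVM back end stays the same.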