Sequence Modeling and Alignment for LVCSR Systems

Conference: Speech Communication - 13. ITG-Fachtagung Sprachkommunikation
10/10/2018 - 10/12/2018 at Oldenburg, Germany

Proceedings: Speech Communication

Pages: 5
Language: English
Type: PDF


Authors:
Beck, Eugen; Zeyer, Albert; Doetsch, Patrick; Merboldt, Andre; Schlueter, Ralf; Ney, Hermann (Lehrstuhl Informatik 6, RWTH Aachen University, Germany)

Abstract:
Today, modeling automatic speech recognition (ASR) systems using deep neural networks (DNNs) has led to considerable improvements in performance, with word error rates approximately halved compared to the state of the art 10 to 15 years ago. Current state-of-the-art systems, at least when trained on moderate to medium amounts of training data, still follow the classical separation into language models and generative acoustic models. Acoustic modeling in these systems follows the so-called hybrid HMM approach. In recent years, however, many efforts have been made to derive end-to-end models for ASR, which naturally follow the discriminative structure of neural networks. These include alternative solutions to the alignment problem underlying ASR, which in classical systems is solved using hidden Markov models (HMMs). In this work we discuss and analyze two novel approaches to DNN-based ASR: the attention-based encoder–decoder approach and the (segmental) inverted HMM approach. Experimental results are presented on the well-known Switchboard corpus and compared against the standard hybrid approach, with specific focus on the sequence alignment behavior of the different approaches.
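The attention-based encoder–decoder approach mentioned in the abstract replaces the explicit HMM alignment with soft attention weights over the encoder states. The following is a minimal NumPy sketch of a single decoder step using dot-product attention; the function name, dimensions, and scoring function are illustrative assumptions, not the architecture used in the paper:

```python
import numpy as np

def attention_step(encoder_states, decoder_state):
    """One step of dot-product attention (illustrative sketch).

    Scores each encoder time frame against the current decoder state,
    normalizes the scores with a softmax, and returns the attention
    weights (a soft alignment over input frames) plus the resulting
    context vector used to predict the next output label.
    """
    scores = encoder_states @ decoder_state          # (T,) one score per frame
    scores = scores - scores.max()                   # subtract max for stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax: soft alignment
    context = weights @ encoder_states               # (D,) weighted sum of frames
    return weights, context

# Toy example: 4 encoder frames with 3-dimensional hidden states
rng = np.random.default_rng(0)
enc = rng.standard_normal((4, 3))
dec = rng.standard_normal(3)
w, c = attention_step(enc, dec)
print(w)  # the weights form a probability distribution over input frames
```

In contrast to the hybrid HMM approach, where the alignment is a hard, monotonic frame-to-state mapping, these attention weights can in principle distribute mass over any subset of input frames at every output step, which is exactly the alignment behavior the paper's experiments examine.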