An Investigation on Mutual Information for the Linear Predictive System and the Extrapolation of Speech Signals
Conference: Sprachkommunikation - Beiträge zur 10. ITG-Fachtagung (Speech Communication - Contributions to the 10th ITG Conference)
26.09.2012 - 28.09.2012 in Braunschweig, Germany
Pages: 4 | Language: English | Type: PDF
Taghia, Jalal; Martin, Rainer (Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany)
Taghia, Jalil; Leijon, Arne (Sound and Image Processing Lab, KTH Royal Institute of Technology, Stockholm, Sweden)
Mutual information (MI) is an important information-theoretic concept with many applications in telecommunications, blind source separation, and machine learning. More recently, it has also been employed for the instrumental assessment of speech intelligibility, where traditionally correlation-based measures are used. In this paper, we address the difference between MI and correlation from the viewpoint of discovering dependencies between variables in the context of speech signals. We perform our investigation by considering the linear predictive approximation and the extrapolation of speech signals as examples. We compare a parametric MI estimation approach based on a Gaussian mixture model (GMM) with the k-nearest neighbor (KNN) approach, a well-known non-parametric method for estimating MI. We show that the GMM-based MI estimator leads to more consistent results.
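The contrast between MI and correlation that the abstract refers to can be illustrated with a minimal sketch (not the paper's GMM or KNN estimators; all function names here are hypothetical). For a bivariate Gaussian with correlation coefficient rho, MI has the closed form -0.5*ln(1 - rho^2), so in the Gaussian case correlation and MI carry the same information. For a nonlinear dependence such as y = x^2 with zero-mean Gaussian x, however, the correlation is near zero even though y is fully determined by x, which is exactly the kind of dependency a correlation-based measure misses:

```python
import math
import random


def gaussian_mi(rho):
    """Closed-form MI (in nats) of a bivariate Gaussian with correlation rho."""
    return -0.5 * math.log(1.0 - rho * rho)


def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)


rng = random.Random(0)
n, rho = 5000, 0.8

# Linear (Gaussian) dependence: correlation and MI agree.
xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
ys = [rho * x + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0) for x in xs]
r_lin = pearson(xs, ys)          # close to 0.8
mi_est = gaussian_mi(r_lin)      # plug-in MI estimate, close to gaussian_mi(0.8)

# Nonlinear dependence: y2 is a deterministic function of x2,
# yet the correlation is near zero (E[x^3] = 0 for Gaussian x).
xs2 = [rng.gauss(0.0, 1.0) for _ in range(n)]
ys2 = [x * x for x in xs2]
r_nonlin = pearson(xs2, ys2)     # near 0 despite full dependence

print(f"true Gaussian MI: {gaussian_mi(rho):.4f} nats")
print(f"plug-in MI from sample correlation: {mi_est:.4f} nats")
print(f"correlation for y = x^2: {r_nonlin:.4f}")
```

This is only the parametric Gaussian special case; the paper's GMM-based estimator generalizes the parametric idea to mixture densities, and the KNN approach estimates MI directly from neighbor distances without a density model.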