TY - CHAP
A1 - Ghahremani, Pegah
A1 - BabaAli, Bagher
A1 - Povey, Daniel
A1 - Riedhammer, Korbinian
A1 - Trmal, Jan
A1 - Khudanpur, Sanjeev
T1 - A Pitch Extraction Algorithm Tuned for Automatic Speech Recognition
T2 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, May 2014.
N2 - In this paper we present an algorithm that produces pitch and probability-of-voicing estimates for use as features in automatic speech recognition systems. These features give large performance improvements on tonal languages for ASR systems, and even substantial improvements for non-tonal languages. Our method, which we are calling the Kaldi pitch tracker (because we are adding it to the Kaldi ASR toolkit), is a highly modified version of the getf0 (RAPT) algorithm. Unlike the original getf0, we do not make a hard decision whether any given frame is voiced or unvoiced; instead, we assign a pitch even to unvoiced frames while constraining the pitch trajectory to be continuous. Our algorithm also produces a quantity that can be used as a probability-of-voicing measure; it is based on the normalized autocorrelation measure that our pitch extractor uses. We present results on data from various languages in the BABEL project, and show a large improvement over systems without tonal features and over systems where pitch and POV information was obtained from SAcC or getf0.
KW - Automatic Speech Recognition
Y1 - 2014
PB - IEEE
ER -

TY - CHAP
A1 - Riedhammer, Korbinian
A1 - Bocklet, Tobias
A1 - Ghoshal, Arnab
A1 - Povey, Daniel
T1 - Revisiting Semi-Continuous Hidden Markov Models
T2 - 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, March 2012.
N2 - In the past decade, semi-continuous hidden Markov models (SCHMMs) have not attracted much attention in the speech recognition community. Growing amounts of training data and increasing sophistication of model estimation have led to the impression that continuous HMMs are the best choice of acoustic model. However, recent work on recognition of under-resourced languages faces the same old problem of estimating a large number of parameters from limited amounts of transcribed speech. This has led to a renewed interest in methods of reducing the number of parameters while maintaining or extending the modeling capabilities of continuous models. In this work, we compare classic and multiple-codebook semi-continuous models using diagonal and full covariance matrices with continuous HMMs and subspace Gaussian mixture models. Experiments on the RM and WSJ corpora show that while a classical semi-continuous system does not perform as well as a continuous one, multiple-codebook semi-continuous systems can perform better, particularly when using full-covariance Gaussians.
KW - automatic speech recognition
KW - acoustic modeling
Y1 - 2012
SP - 4721
EP - 4724
PB - IEEE
ER -

TY - CHAP
A1 - Povey, Daniel
A1 - Hannemann, Mirko
A1 - Boulianne, Gilles
A1 - Burget, Lukáš
A1 - Ghoshal, Arnab
A1 - Janda, Miloš
A1 - Karafiát, Martin
A1 - Kombrink, Stefan
A1 - Motlíček, Petr
A1 - Qian, Yanmin
A1 - Riedhammer, Korbinian
A1 - Veselý, Karel
A1 - Thang Vu, Ngoc
T1 - Generating Exact Lattices in the WFST Framework
T2 - 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, March 2012.
N2 - We describe a lattice generation method that is exact, i.e. it satisfies all the natural properties we would want from a lattice of alternative transcriptions of an utterance. This method does not introduce substantial overhead above one-best decoding. Our method is most directly applicable when using WFST decoders where the WFST is “fully expanded”, i.e. where the arcs correspond to HMM transitions. It outputs lattices that include HMM-state-level alignments as well as word labels. The general idea is to create a state-level lattice during decoding, and to do a special form of determinization that retains only the best-scoring path for each word sequence. This special determinization algorithm is a solution to the following problem: given a WFST A, compute a WFST B that, for each input-symbol sequence of A, contains just the lowest-cost path through A.
KW - Speech Recognition
KW - Lattice Generation
Y1 - 2012
SP - 4213
EP - 4216
PB - IEEE
ER -