In this article, we describe a semi-automatic calibration algorithm for dereverberation by spectral subtraction. We verify the method by a comparison to a manual calibration derived from measured room impulse responses (RIR). We conduct extensive experiments to understand the effect of all involved parameters and to verify values suggested in the literature. The experiments are performed on a text read by 31 speakers and recorded by a headset and three far-field microphones. Results are measured in terms of automatic speech recognition (ASR) performance using a 1-gram language model to emphasize acoustic recognition performance. To accommodate the acoustic change introduced by dereverberation, we apply supervised MAP adaptation to the hidden Markov model output probabilities. The combination of dereverberation and adaptation yields a relative improvement of about 35% in terms of word error rate (WER) compared to the original signal.
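As a rough illustration of the dereverberation step (not the paper's calibrated setup), the following sketch estimates the late-reverberation power as a scaled, delayed copy of the observed power spectrum and subtracts it in the STFT domain; the scaling factor, delay and spectral floor below are illustrative values, not the calibrated parameters.

```python
# A minimal sketch of dereverberation by spectral subtraction: the late
# reverberation power is modelled as a scaled, delayed copy of the observed
# power spectrum. rho, delay_s and floor are illustrative, not calibrated.
import numpy as np
from scipy.signal import stft, istft

def dereverberate(x, fs, rho=0.5, delay_s=0.05, floor=0.01):
    f, t, Y = stft(x, fs=fs, nperseg=512)            # hop = 256 samples by default
    P = np.abs(Y) ** 2
    d = max(1, int(round(delay_s * fs / 256)))       # delay expressed in STFT frames
    P_late = np.zeros_like(P)
    P_late[:, d:] = rho * P[:, :-d]                  # late-reverberation power estimate
    gain = np.sqrt(np.maximum(1.0 - P_late / np.maximum(P, 1e-12), floor))
    _, x_hat = istft(gain * Y, fs=fs, nperseg=512)   # apply gain, back to time domain
    return x_hat
```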
In this paper we describe Erlangen-CLP, a large speech database of children with Cleft Lip and Palate (CLP). More than 800 German children with CLP (most of them between 4 and 18 years old) and 380 age-matched control speakers spoke the semi-standardized PLAKSS test, which consists of words containing all German phonemes in different positions. So far, 250 CLP speakers have been manually transcribed; 120 of these were analyzed by a speech therapist and 27 of them by four additional therapists. The therapists marked six different processes/criteria, such as pharyngeal backing and hypernasality, which typically occur in the speech of people with CLP. We present detailed statistics about the marked processes and the inter-rater agreement.
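Such an agreement analysis could, for instance, be computed pairwise over the therapists' binary process marks; the sketch below uses Cohen's kappa on made-up labels and is not necessarily the agreement measure used for the database.

```python
# A minimal sketch of pairwise inter-rater agreement on binary process marks
# (e.g. hypernasality present/absent); raters and labels are hypothetical.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

ratings = {                       # rater -> label per utterance (illustrative data)
    "rater_1": [1, 0, 1, 1, 0],
    "rater_2": [1, 0, 0, 1, 0],
    "rater_3": [1, 1, 1, 1, 0],
}

for a, b in combinations(ratings, 2):
    kappa = cohen_kappa_score(ratings[a], ratings[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")
```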
This paper describes the acquisition, transcription and annotation of a multi-media corpus of academic spoken English, the LMELectures. It consists of two lecture series given in the summer term of 2009 at the computer science department of the University of Erlangen-Nuremberg, covering topics in pattern analysis, machine learning and interventional medical image processing. In total, about 40 hours of high-definition audio and video of a single speaker were acquired in a constant recording environment. In addition to the recordings, the presentation slides are available in machine-readable (PDF) format. The manual annotations include a suggested segmentation into speech turns and a complete manual transcription that was done using BLITZSCRIBE2, a new tool for rapid transcription. For one lecture series, the lecturer assigned key words to each recording; one recording of that series was further annotated with ranked key phrase lists by five human annotators. The corpus is available for non-commercial purposes upon request.
Cleft Lip and Palate (CLP) is among the most frequent congenital abnormalities. The impaired facial development affects articulation, with different phonemes being impacted inhomogeneously across patients. This work focuses on the automatic phoneme analysis of children with CLP for detailed diagnosis and therapy control. In clinical routine, the state-of-the-art evaluation is based on perceptual ratings, which act as the ground truth throughout this work; the goal is to build an automatic system that is as reliable as human listeners. We propose two automatic systems that model the articulatory space of a speaker: one models a speaker by a GMM, the other employs a speech recognition system and estimates fMLLR matrices for each speaker. Support vector regression (SVR) is then used to predict the perceptual ratings. We show that the fMLLR-based system achieves automatic phoneme evaluation results that are in the same range as the perceptual inter-rater agreement.
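A minimal sketch of the final regression step, assuming per-speaker feature vectors (e.g. a vectorized fMLLR transform or a GMM supervector) have already been computed elsewhere; the data, SVR settings and evaluation below are placeholders, not the paper's configuration.

```python
# A minimal sketch: per-speaker feature vectors are mapped to perceptual
# scores with an SVR and evaluated by correlation. All data are placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

X = np.random.randn(120, 1600)          # placeholder: one feature vector per speaker
y = np.random.uniform(1, 5, size=120)   # placeholder: perceptual ratings per speaker

pred = cross_val_predict(SVR(kernel="linear", C=1.0), X, y, cv=5)
r, _ = pearsonr(pred, y)
print(f"speaker-level correlation with perceptual ratings: {r:.2f}")
```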
A growing number of universities and other educational institutions provide recordings of lectures and seminars as an additional resource to their students. In contrast to educational films, which are scripted, directed and often shot by film professionals, these plain recordings are typically not post-processed in an editorial sense. Thus, the videos often contain longer periods of inactivity or silence, unnecessary repetitions, or corrections of prior mistakes. This paper describes the FAU Video Lecture Browser system, a web-based platform for the interactive assessment of video lectures that helps to close the gap between a plain recording and a useful e-learning resource by displaying automatically extracted and ranked key phrases on an augmented timeline based on stream graphs. In a pilot study, users of the interface were able to complete a topic localization task about 29% faster than users provided with the video only, while achieving about the same accuracy. The user interactions can be logged on the server to collect data for evaluating the quality of the phrases and rankings, and for training systems that produce customized phrase rankings.
In earlier studies, we assessed the degree of non-nativeness employing prosodic information. In this paper, we combine prosodic information with (1) features derived from a Gaussian Mixture Model used as Universal Background Model (GMM-UBM), a powerful approach used in speaker identification, and (2) openSMILE, a standard open-source toolkit for extracting acoustic features. We evaluate our approach with English speech from 94 non-native speakers. GMM-UBM or openSMILE modelling alone yields lower performance than our prosodic feature vector; however, adding information from the GMM-UBM modelling or openSMILE by late fusion improves results.
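Late fusion here can be as simple as a weighted combination of the per-speaker scores produced by the individual systems; the sketch below uses made-up scores and weights purely for illustration.

```python
# A minimal sketch of late fusion: per-speaker scores from three separate
# systems (prosodic, GMM-UBM based, openSMILE based) are combined by a
# weighted average. Scores and weights are illustrative, not from the paper.
import numpy as np

scores_prosodic = np.array([2.8, 4.1, 3.0])   # hypothetical non-nativeness scores
scores_gmm_ubm  = np.array([3.2, 3.9, 2.7])
scores_smile    = np.array([2.9, 4.4, 3.1])

weights = np.array([0.5, 0.3, 0.2])           # e.g. tuned on a development set
fused = weights @ np.vstack([scores_prosodic, scores_gmm_ubm, scores_smile])
print(fused)
```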
Voice scrambling is widely used to add privacy to the radio communication of various authorities, but it is also used by criminals to evade prosecution. In this article, we consider various analog voice scrambling techniques such as fixed frequency inversion, split-band inversion and rolling code scramblers. We explain how to break them using automatically extracted measures and scoring algorithms, and evaluate the proposed system on simulated data. While simple inversion can be broken easily, the more advanced techniques require additional work before they can be broken fully automatically; the presented user interface allows the user to refine the automatic results to obtain a high-quality solution.
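For the simplest case, fixed frequency inversion, descrambling amounts to mixing the signal with a carrier at the inversion frequency and low-pass filtering, which mirrors the spectrum back; the carrier frequency in this sketch is an assumed value that would in practice have to be estimated, which is what the scoring algorithms are for.

```python
# A minimal sketch of undoing fixed frequency inversion: mixing with a carrier
# produces components at f_c - f and f_c + f; keeping the band below f_c
# mirrors the spectrum back (f -> f_c - f). f_carrier is an assumed value and
# the sampling rate must exceed 2 * f_carrier.
import numpy as np
from scipy.signal import butter, filtfilt

def invert_spectrum(x, fs, f_carrier=3300.0):
    t = np.arange(len(x)) / fs
    mixed = x * np.cos(2 * np.pi * f_carrier * t)        # mix with inversion carrier
    b, a = butter(8, f_carrier / (fs / 2), btype="low")  # keep the mirrored band
    return filtfilt(b, a, mixed)
```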
We present a novel lecture browser that utilizes ranked key phrases displayed on a stream graph to overcome the shortcomings of traditional extractive (query-based) summaries. The system extracts key phrases from the ASR transcripts, performs an unsupervised ranking, and displays an initial number of phrases on the stream graph. This graph gives an intuition of when each key phrase is spoken and how dominant it is throughout the lecture. The user can select the phrases to be displayed and, furthermore, adjust the ranking of all phrases. All user interactions are logged to a server to improve the ranking algorithms and provide user-specific rankings.
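The browser's own ranking method is not reproduced here; as one plausible unsupervised criterion, the sketch below ranks candidate phrases from ASR transcripts by TF-IDF.

```python
# A minimal sketch of unsupervised key-phrase ranking over ASR transcripts,
# using TF-IDF as one possible criterion (not the browser's actual method).
from sklearn.feature_extraction.text import TfidfVectorizer

transcripts = [
    "pattern analysis lecture on hidden markov models and gaussian mixtures",
    "image processing lecture on segmentation and registration",
]  # placeholder ASR transcripts, one per lecture

vec = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
tfidf = vec.fit_transform(transcripts)
terms = vec.get_feature_names_out()

scores = tfidf[0].toarray().ravel()                      # rank phrases for lecture 0
ranking = sorted(zip(terms, scores), key=lambda p: -p[1])
print(ranking[:5])
```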
A growing number of universities offer recordings of lectures, seminars and talks in an online e-learning portal. However, the user is often not interested in the entire recording, but is looking for parts covering a certain topic. Usually, the user has to either watch the whole video or “zap” through the lecture and risk missing important details. We present an integrated web-based platform to help users find relevant sections within recorded lecture videos by providing them with a ranked list of key phrases. For a user-defined subset of these, a StreamGraph visualizes when important key phrases occur and how prominent they are at the given time. To come up with the best key phrase rankings, we evaluate three different key phrase ranking methods using lectures of different topics by comparing automatic with human rankings, and show that human and automatic rankings yield similar scores using Normalized Discounted Cumulative Gain (NDCG).
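NDCG compares a system ranking against an ideal ordering of the same relevance judgments; the sketch below computes it for a made-up list of human relevance values taken in the system's ranking order.

```python
# A minimal sketch of comparing an automatic key-phrase ranking against human
# judgments with NDCG; relevance values and list length are illustrative.
import numpy as np

def dcg(relevances):
    relevances = np.asarray(relevances, dtype=float)
    ranks = np.arange(2, relevances.size + 2)     # positions 1..n -> log2(2..n+1)
    return np.sum(relevances / np.log2(ranks))

def ndcg(relevances_in_system_order):
    ideal = sorted(relevances_in_system_order, reverse=True)
    return dcg(relevances_in_system_order) / dcg(ideal)

# human relevance of each phrase, listed in the automatic ranking order
human_relevance = [3, 2, 3, 0, 1, 2]
print(f"NDCG = {ndcg(human_relevance):.3f}")
```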
In this paper, we describe a new Java framework for developing new GUI-based speech processing applications easily and efficiently. Standard components are provided to display the speech signal, the power plot, and the spectrogram. Furthermore, a component to create a new transcription and to display and manipulate an existing transcription is provided, as well as a component to display and manually correct external pitch values. These Swing components can easily be embedded into custom Java programs and can be synchronized to display the same region of the speech file. The object-oriented design provides base classes for the rapid development of custom components.
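The framework itself is Java/Swing and its API is not reproduced here; as a language-neutral illustration of the kind of linked views it provides (waveform, short-time power and spectrogram sharing one time axis), here is a minimal matplotlib sketch with a placeholder file name.

```python
# A minimal, language-neutral sketch of linked waveform / power / spectrogram
# views on a shared time axis; the actual framework is Java/Swing and its API
# is not reproduced here. "speech.wav" is a placeholder file name.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

fs, x = wavfile.read("speech.wav")
x = x.astype(float) / np.max(np.abs(x))
t = np.arange(len(x)) / fs

fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True)  # shared x-axis keeps the views in sync
ax1.plot(t, x)
ax1.set_ylabel("waveform")

frame = int(0.025 * fs)                                  # 25 ms moving-average power
power = np.convolve(x ** 2, np.ones(frame) / frame, mode="same")
ax2.plot(t, 10 * np.log10(power + 1e-10))
ax2.set_ylabel("power [dB]")

ax3.specgram(x, Fs=fs, NFFT=512, noverlap=256)
ax3.set_ylabel("spectrogram")
ax3.set_xlabel("time [s]")
plt.show()
```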