Interactive Approaches to Video Lecture Assessment

  • Folks who were here last winter, prior to ASRU, might be familiar with the title of this talk. But don't be misled, I'll have something new for you. In this talk, I will give an overview of the FAU Lecture Browser, which I developed in the context of my thesis. I will start with a description of a novel data set: the LME Lectures, a corpus of two series of graduate-level computer science lectures with 18 recordings each. The courses cover topics in medical image processing and pattern analysis/machine learning. The roughly 40 hours of speech were manually transcribed, and one particular lecture was annotated with key phrases by five human raters. Using this data set, I trained three different speech recognizers based on regular continuous, multi-codebook semi-continuous, and subspace Gaussian mixture models, which achieve a word error rate (WER) of about 10%. I will then briefly describe the key phrase extraction and automatic ranking, which were compared against the five human raters on one lecture recording. Finally, I will talk about a small usability study in which 10 students were asked to perform a given task, both with and without the proposed lecture browser. Although the number of participants is small, the numbers are interesting: users who had the interface completed the tasks about 30% faster than the control group while maintaining about the same accuracy.
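As a point of reference for the WER figure quoted above: word error rate is the word-level edit distance between the recognizer output and the reference transcript, divided by the number of reference words. The following Python snippet is a minimal sketch of that computation, not code from the thesis, and the example sentences are purely hypothetical:

    # Minimal sketch (not from the thesis): word error rate as the
    # Levenshtein distance over word sequences, normalized by the
    # number of reference words.
    def word_error_rate(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between ref[:i] and hyp[:j]
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                d[i][j] = min(
                    d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution or match
                    d[i][j - 1] + 1,                               # insertion
                    d[i - 1][j] + 1,                               # deletion
                )
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    # Hypothetical example: one substitution in a ten-word reference -> 10% WER.
    ref = "the browser ranks key phrases and links them to video"
    hyp = "the browser ranks key phrase and links them to video"
    print(f"WER = {word_error_rate(ref, hyp):.1%}")  # 10.0%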

Metadata
Author: Korbinian Riedhammer
Publisher: Logos Verlag Berlin GmbH
Place of publication: Berlin
Document Type: Book
Language: English
Publication Year: 2012
Tag: Speech Recognition
Faculties / departments: Fakultät für Informatik
Dewey Decimal Classification: 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 000 Computer science, information & general works