<?xml version="1.0" encoding="utf-8"?>
<export-example>
  <doc>
    <id>268</id>
    <completedYear>2012</completedYear>
    <publishedYear/>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>392</pageFirst>
    <pageLast>397</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>IEEE</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-06-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">The FAU Video Lecture Browser System</title>
    <abstract language="eng">A growing number of universities and other educational institutions provide recordings of lectures and seminars as an additional resource to the students. In contrast to educational films that are scripted, directed and often shot by film professionals, these plain recordings are typically not post-processed in an editorial sense. Thus, the videos often contain longer periods of inactivity or silence, unnecessary repetitions, or corrections of prior mistakes. This paper describes the FAU Video Lecture Browser system, a web-based platform for the interactive assessment of video lectures, that helps to close the gap between a plain recording and a useful e-learning resource by displaying automatically extracted and ranked key phrases on an augmented time line based on stream graphs. In a pilot study, users of the interface were able to complete a topic localization task about 29 % faster than users provided with the video only while achieving about the same accuracy. The user interactions can be logged on the server to collect data to evaluate the quality of the phrases and rankings, and to train systems that produce customized phrase rankings.</abstract>
    <parentTitle language="eng">2012 IEEE Spoken Language Technology Workshop (SLT), Miami, FL, USA, December 2012.</parentTitle>
    <enrichment key="review.accepted_by">2</enrichment>
    <author>Korbinian Riedhammer</author>
    <author>Martin Gropp</author>
    <author>Elmar Nöth</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>automatic speech recognition</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>key phrase extraction</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>key phrase ranking</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>visualization</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>user interaction</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="">Fakultät für Informatik</collection>
    <thesisPublisher>Technische Hochschule Rosenheim</thesisPublisher>
  </doc>
  <doc>
    <id>269</id>
    <completedYear>2012</completedYear>
    <publishedYear/>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-06-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">The Automatic Assessment of Non-native Prosody: Combining Classical Prosodic Analysis with Acoustic Modelling</title>
    <abstract language="eng">In earlier studies, we assessed the degree of non-nativeness employing prosodic information. In this paper, we combine prosodic information with (1) features derived from a Gaussian Mixture Model used as Universal Background Model (GMM-UBM), a powerful approach used in speaker identification, and (2) openSMILE, a standard open-source toolkit for extracting acoustic features. We evaluate our approach with English speech from 94 non-native speakers. GMM-UBM or openSMILE modelling alone yields lower performance than our prosodic feature vector; however, adding information from the GMM-UBM modelling or openSMILE by late fusion improves results.</abstract>
    <parentTitle language="eng">INTERSPEECH 2012, 13th Annual Conference of the International Speech Communication Association (ISCA), Portland, OR, USA, September 2012.</parentTitle>
    <enrichment key="review.accepted_by">2</enrichment>
    <author>Florian Hönig</author>
    <author>Tobias Bocklet</author>
    <author>Korbinian Riedhammer</author>
    <author>Anton Batliner</author>
    <author>Elmar Nöth</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>computer-assisted language learning</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>non-native prosody</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>rhythm</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>automatic assessment</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="">Fakultät für Informatik</collection>
    <thesisPublisher>Technische Hochschule Rosenheim</thesisPublisher>
  </doc>
  <doc>
    <id>270</id>
    <completedYear>2012</completedYear>
    <publishedYear/>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>8349</pageFirst>
    <pageLast>8353</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-06-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">A Software Kit for Automatic Voice Descrambling</title>
    <abstract language="eng">Voice scrambling is widely used to add privacy to the radio communication of various authorities - but is also used by criminals to evade prosecution. In this article, we consider various analog voice scrambling techniques such as fixed frequency inversion, splitband inversion and rolling code scramblers. We explain how to break them using automatically extracted measures and scoring algorithms, and evaluate the proposed system using simulated data. While the simple inversion can be easily broken, the more advanced techniques require additional work prior to unsupervised automatization; the presented user interface allows the user to refine the automatic results to obtain a high quality solution.</abstract>
    <parentTitle language="eng">2012 IEEE International Conference on Communications (ICC); IEEE International Workshop on Security and Forensics in Communication Systems (SFCS).</parentTitle>
    <enrichment key="review.accepted_by">2</enrichment>
    <author>Korbinian Riedhammer</author>
    <author>Martin Ring</author>
    <author>Elmar Nöth</author>
    <author>Daniel Kolb</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Speech Recognition</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="">Fakultät für Informatik</collection>
    <thesisPublisher>Technische Hochschule Rosenheim</thesisPublisher>
  </doc>
  <doc>
    <id>271</id>
    <completedYear>2012</completedYear>
    <publishedYear/>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>4721</pageFirst>
    <pageLast>4724</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName>IEEE</publisherName>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-06-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Revisiting Semi-Continuous Hidden Markov Models</title>
    <abstract language="eng">In the past decade, semi-continuous hidden Markov models (SCHMMs) have not attracted much attention in the speech recognition community. Growing amounts of training data and increasing sophistication of model estimation led to the impression that continuous HMMs are the best choice of acoustic model. However, recent work on recognition of under-resourced languages faces the same old problem of estimating a large number of parameters from limited amounts of transcribed speech. This has led to a renewed interest in methods of reducing the number of parameters while maintaining or extending the modeling capabilities of continuous models. In this work, we compare classic and multiple-codebook semi-continuous models using diagonal and full covariance matrices with continuous HMMs and subspace Gaussian mixture models. Experiments on the RM and WSJ corpora show that while a classical semicontinuous system does not perform as well as a continuous one, multiple-codebook semi-continuous systems can perform better, particular when using full-covariance Gaussians.</abstract>
    <parentTitle language="eng">2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, March 2012.</parentTitle>
    <enrichment key="review.accepted_by">2</enrichment>
    <author>Korbinian Riedhammer</author>
    <author>Tobias Bocklet</author>
    <author>Arnab Ghoshal</author>
    <author>Daniel Povey</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>automatic speech recognition</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>acoustic modeling</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="">Fakultät für Informatik</collection>
    <thesisPublisher>Technische Hochschule Rosenheim</thesisPublisher>
  </doc>
  <doc>
    <id>272</id>
    <completedYear>2012</completedYear>
    <publishedYear/>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>4213</pageFirst>
    <pageLast>4216</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-06-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Generating Exact Lattices in the WFST Framework</title>
    <abstract language="deu">We describe a lattice generation method that is exact, i.e. it satisfies all the natural properties we would want from a lattice of alternative transcriptions of an utterance. This method does not introduce substantial overhead above one-best decoding. Our method is most directly applicable when using WFST decoders where the WFST is “fully expanded”, i.e. where the arcs correspond to HMM transitions. It outputs lattices that include HMM-state-level alignments as well as word labels. The general idea is to create a state-level lattice during decoding, and to do a special form of determinization that retains only the best-scoring path for each word sequence. This special determinization algorithm is a solution to the following problem: Given a WFST A, compute a WFST B that, for each input-symbol-sequence of A, contains just the lowest-cost path through A.</abstract>
    <parentTitle language="eng">2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, May 2012.</parentTitle>
    <enrichment key="review.accepted_by">2</enrichment>
    <author>Daniel Povey</author>
    <author>Mirko Hannemann</author>
    <author>Gilles Boulianne</author>
    <author>Lukáš Burget</author>
    <author>Arnab Ghoshal</author>
    <author>Miloš Janda</author>
    <author>Martin Karafiát</author>
    <author>Stefan Kombrink</author>
    <author>Petr Motlíček</author>
    <author>Yanmin Qian</author>
    <author>Korbinian Riedhammer</author>
    <author>Karel Veselý</author>
    <author>Ngoc Thang Vu</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Speech Recognition</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Lattice Generation</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="">Fakultät für Informatik</collection>
    <thesisPublisher>Technische Hochschule Rosenheim</thesisPublisher>
  </doc>
  <doc>
    <id>296</id>
    <completedYear>2012</completedYear>
    <publishedYear/>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>book</type>
    <publisherName>Logos Verlag Berlin GmbH</publisherName>
    <publisherPlace>Berlin</publisherPlace>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-06-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Interactive Approaches to Video Lecture Assessment</title>
    <abstract language="eng">Folks that have been here last winter prior to ASRU might be familiar with the title of that talk. But don't be misled, I'll have something new for you. In this talk, I will give an overview over the FAU Lecture Browser which I developed in the context of my thesis. I will start out with the description of a novel data set: The LME Lectures are a corpus of two series of graduate level computer science lectures with 18 recordings each. The courses cover topics in medical image processing and pattern analysis/machine learning. The roughly 40 hours of speech were manually transcribed, and one particular lecture was annotated with key phrases by five human raters. Using this data set, I trained three different speech recognizers using regular continuous, multi-codebook semi-continuous and subspace Gaussian mixture models, that show an error rate of about 10% WER. I will then briefly describe the key phrase extraction and automatic ranking, which was then compared against five raters on one lecture recording. Finally, I will talk about a little usability study where 10 students were asked to perform a certain task-- with and without the proposed lecture browser. Although the number of contestants is limited, the numbers are interesting: the users that had the interface could complete the tasks about 30% faster than the control group, while maintaining about the same accuracy.</abstract>
    <enrichment key="review.accepted_by">2</enrichment>
    <author>Korbinian Riedhammer</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Speech Recognition</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="">Fakultät für Informatik</collection>
    <thesisPublisher>Technische Hochschule Rosenheim</thesisPublisher>
  </doc>
  <doc>
    <id>307</id>
    <completedYear>2012</completedYear>
    <publishedYear/>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>390</pageFirst>
    <pageLast>397</pageLast>
    <pageNumber/>
    <edition/>
    <issue>3</issue>
    <volume>26</volume>
    <type>contributiontoperiodical</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>2018-06-04</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Automatic Intelligibility Assessment of Speakers After Laryngeal Cancer by Means of Acoustic Modeling</title>
    <abstract language="eng">One aspect of voice and speech evaluation after laryngeal cancer is acoustic analysis. Perceptual evaluation by expert raters is a standard in the clinical environment for global criteria such as overall quality or intelligibility. So far, automatic approaches evaluate acoustic properties of pathologic voices based on voiced/unvoiced distinction and fundamental frequency analysis of sustained vowels. Because of the high amount of noisy components and the increasing aperiodicity of highly pathologic voices, a fully automatic analysis of fundamental frequency is difficult. We introduce a purely data-driven system for the acoustic analysis of pathologic voices based on recordings of a standard text.</abstract>
    <parentTitle language="eng">Journal of Voice</parentTitle>
    <enrichment key="review.accepted_by">2</enrichment>
    <author>Tobias Bocklet</author>
    <author>Korbinian Riedhammer</author>
    <author>Elmar Nöth</author>
    <author>Ulrich Eysholdt</author>
    <author>Tino Haderlein</author>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>Speech Recognition</value>
    </subject>
    <subject>
      <language>eng</language>
      <type>uncontrolled</type>
      <value>laryngeal cancer</value>
    </subject>
    <collection role="ddc" number="000">Informatik, Informationswissenschaft, allgemeine Werke</collection>
    <collection role="institutes" number="">Fakultät für Informatik</collection>
    <thesisPublisher>Technische Hochschule Rosenheim</thesisPublisher>
  </doc>
</export-example>
