Incremental spoken dialogue systems, which process user input as it unfolds, pose additional engineering challenges compared to more standard non-incremental systems: their processing components must be able to accept partial, and possibly subsequently revised, input, and must produce output that is at the same time as accurate as possible and delivered with as little delay as possible. In this article, we define metrics that measure how well a given processor meets these challenges, and we identify types of gold standards for evaluation. We exemplify these metrics in the evaluation of several incremental processors that we have developed. We also present generic means to optimise some of the measures, if certain trade-offs are accepted. We believe that this work will help enable principled comparison of components for incremental dialogue systems and portability of results.
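The abstract does not reproduce the metric definitions; as one concrete illustration of the kind of measure involved, the sketch below computes edit overhead, the fraction of a processor's incremental edits (additions and revocations of output tokens) that turn out to have been unnecessary. The hypothesis format and the function name are assumptions made for this example.

```python
def edit_overhead(hypotheses):
    """Fraction of incremental edits that were unnecessary.

    `hypotheses` is the sequence of partial outputs a processor emitted,
    each a list of tokens.  The minimum number of edits is the length of
    the final hypothesis (each token added once); everything beyond that
    is overhead caused by revisions.
    """
    edits, prev = 0, []
    for hyp in hypotheses:
        # the common prefix survives; the rest is revoked and re-added
        common = 0
        while common < min(len(prev), len(hyp)) and prev[common] == hyp[common]:
            common += 1
        edits += (len(prev) - common) + (len(hyp) - common)
        prev = hyp
    necessary = len(prev)  # tokens in the final output
    return (edits - necessary) / edits if edits else 0.0

# "one two" was briefly misrecognised as "one too": 2 of 4 edits wasted.
print(edit_overhead([["one"], ["one", "too"], ["one", "two"]]))  # 0.5
```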
We present a toolkit for manipulating andvisualising time-aligned linguistic datasuch as dialogue transcripts or languageprocessing data. The package comple-ments existing editing tools by allowingfor conversion between their formats, in-formation extraction from the raw files,and by adding sophisticated, and easily ex-tended methods for visualising the dynam-ics of dialogue processing. To illustratethe versatility of the package, we describeits use in three different projects at our site.
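The toolkit itself is not shown here; as a hedged sketch of the kind of record such tools exchange, the following parses a simple tab-separated start/end/label format and queries which annotations are active at a given time. The format and function names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # seconds
    end: float
    label: str

def parse_line(line):
    """Parse one 'start<TAB>end<TAB>label' record."""
    start, end, label = line.rstrip("\n").split("\t", 2)
    return Interval(float(start), float(end), label)

def spans_at(intervals, t):
    """All annotations active at time t -- the basic query behind
    visualising the state of dialogue processing at a given moment."""
    return [iv for iv in intervals if iv.start <= t < iv.end]

data = ["0.00\t0.42\ttake", "0.42\t0.61\tthe", "0.61\t1.10\tred"]
ivs = [parse_line(line) for line in data]
print(spans_at(ivs, 0.5))  # -> [Interval(start=0.42, end=0.61, label='the')]
```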
Comparing Local and Sequential Models for Statistical Incremental Natural Language Understanding
(2010)
Incremental natural language understanding is the task of assigning semantic representations to successively larger prefixes of utterances. We compare two types of statistical models for this task: a) local models, which predict a single class for an input; and b) sequential models, which align a sequence of classes to a sequence of input tokens. We show that, with some modifications, the first type of model can be improved and made to approximate the output of the second, even though the latter is more informative. We show on two different data sets that both types of model achieve comparable performance (significantly better than a baseline), with the first type requiring simpler training data. Results for the first type of model have been reported in the literature; we show that for our kind of data our more sophisticated variant of the model performs better.
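As a hedged illustration of the "local" model type (features, classes, and data below are invented, not the paper's), the sketch trains a classifier that maps each utterance prefix independently to a single semantic class. Note that the training data needs no token-level alignment, unlike for sequential models.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Every prefix of an utterance is labelled with the frame of the full
# utterance -- the "simpler training data" of the local model type.
prefixes = ["take", "take the", "take the red", "take the red cross",
            "delete", "delete the", "delete the red", "delete the red cross"]
labels = ["TAKE"] * 4 + ["DELETE"] * 4

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(prefixes, labels)

# Incremental understanding: re-classify as the utterance grows.
for prefix in ["take", "take the", "take the red cross"]:
    print(prefix, "->", model.predict([prefix])[0])
```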
Participants in a conversation are normally receptive to their surroundings and their interlocutors, even while they are speaking, and can, if necessary, adapt their ongoing utterance. Typical dialogue systems are not receptive and cannot adapt while uttering. We present combinable components for incremental natural language generation and incremental speech synthesis and demonstrate the flexibility they can achieve with an example system that adapts to a listener's acoustic understanding problems by pausing, repeating and possibly rephrasing problematic parts of an utterance. In an evaluation, this system was rated as significantly more natural than two systems representing the current state of the art that either ignore the interrupting event or just pause; it also has a lower response time.
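The following is a minimal sketch, with an invented event interface, of the adaptation behaviour described above: chunked delivery that pauses on an acoustic problem and then repeats, possibly in rephrased form, the affected material.

```python
def deliver(chunks, noise_during, rephrasings=None):
    """Yield the chunks actually spoken.  `noise_during` holds indices of
    chunks hit by an acoustic problem; `rephrasings` maps a chunk to an
    alternative wording used when it is re-delivered."""
    rephrasings = rephrasings or {}
    for i, chunk in enumerate(chunks):
        yield chunk
        if i in noise_during:
            yield "<pause>"                      # wait for the noise to end
            yield rephrasings.get(chunk, chunk)  # repeat, possibly rephrased

spoken = deliver(["in five hundred metres", "turn left", "onto Main Street"],
                 noise_during={1},
                 rephrasings={"turn left": "make a left turn"})
print(" / ".join(spoken))
# in five hundred metres / turn left / <pause> / make a left turn / onto Main Street
```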
When dialogue systems, through the use of incremental processing, are not bounded anymore by strict, non-overlapping turn-taking, a whole range of additional interactional devices becomes available. We explore the use of one such device, trial intonation. We elaborate our approach to dialogue management in incremental systems, based on the Information-State-Update approach, and discuss an implementation in a micro-domain that lends itself to the use of immediate feedback, trial intonations and expansions. In an overhearer evaluation, the incremental system was judged as significantly more human-like and reactive than a non-incremental version.
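As a loose sketch of the Information-State-Update pattern mentioned above (the rule names, state fields, event format, and direction of the exchange are all illustrative, not the paper's), each incoming event is handled by the first update rule whose precondition holds.

```python
# Dialogue state as a structured record, mutated by update rules.
state = {"grounded": [], "pending": None}

def rule_trial_intonation(state, event):
    # A trial (rising) intonation invites immediate feedback before
    # the utterance is complete.
    if event["type"] == "word" and event.get("intonation") == "rising":
        state["pending"] = event["word"]
        return "give-feedback"   # acknowledge right away

def rule_confirm(state, event):
    if event["type"] == "ack" and state["pending"]:
        state["grounded"].append(state["pending"])
        state["pending"] = None
        return "continue"

RULES = [rule_trial_intonation, rule_confirm]

def update(state, event):
    for rule in RULES:
        action = rule(state, event)
        if action:
            return action

print(update(state, {"type": "word", "word": "red", "intonation": "rising"}))
print(update(state, {"type": "ack"}), state["grounded"])
```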
If we can model the cognitive and communicative processes underlying speech, we should be able to better predict what a speaker will do. With this idea as inspiration, we examine a number of prosodic and timing features as potential sources of information on what words the speaker is likely to say next. In spontaneous dialog we find that word probabilities do vary with such features. Using perplexity as the metric, the most informative of these included recent speaking rate, volume, and pitch, and time until end of utterance. Using simple combinations of such features to augment trigram language models gave up to an 8.4% perplexity benefit on the Switchboard corpus, and up to a 1.0% relative reduction in word error rate (0.3% absolute) on the Verbmobil II corpus.
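To make the metric concrete, the sketch below computes perplexity over per-word probabilities and shows a simple linear interpolation of a base trigram probability with a prosody-conditioned estimate. All numbers are toy values, not corpus results.

```python
import math

def perplexity(probs):
    """probs: per-word probabilities the model assigned to a test sequence."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

def interpolate(p_trigram, p_prosodic, lam=0.9):
    """Mix a trigram probability with a prosody-conditioned estimate."""
    return lam * p_trigram + (1 - lam) * p_prosodic

# If prosodic features (e.g. fast recent speaking rate near the end of an
# utterance) make the next word more predictable, perplexity drops:
base = [0.1, 0.05, 0.2, 0.1]
prosodic = [0.3, 0.20, 0.4, 0.3]
mixed = [interpolate(b, p) for b, p in zip(base, prosodic)]
print(perplexity(base), perplexity(mixed))  # mixed model scores lower
```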
Holding non-co-located conversations while driving is dangerous (Horrey and Wickens, 2006; Strayer et al., 2006), much more so than conversations with physically present, “situated” interlocutors (Drews et al., 2004). In-car dialogue systems typically resemble non-co-located conversations more closely, and share their negative impact (Strayer et al., 2013). We implemented and tested a simple strategy for making in-car dialogue systems aware of the driving situation, by giving them the capability to interrupt themselves when a dangerous situation is detected, and to resume when it is over. We show that this improves both driving performance and recall of system-presented information, compared to a non-adaptive strategy.
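A hedged sketch of such a strategy follows; the paper's implementation is not shown, and the danger signal here is scripted rather than read from a driving simulator.

```python
def present(chunks, danger):
    """Return what the system says; `danger` is a function returning True
    while the situation is unsafe.  Waiting is busy-looped only for this
    demo; a real system would block on an event."""
    out = []
    for chunk in chunks:
        if danger():
            out.append("<interrupt>")
            while danger():      # suspend output until the situation is over
                pass
            out.append("<resume>")
        out.append(chunk)
    return out

script = iter([False, True, False, False])  # danger during the second chunk
print(present(["your next appointment", "is at three pm", "at the dentist"],
              lambda: next(script, False)))
```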
When humans speak, they do not plan their full utterance in all detail before beginning to speak, nor do they speak piece-by-piece while ignoring their full message; instead, humans use partial representations in which they fill in the missing parts as the utterance unfolds. Incremental speech synthesizers, in contrast, have not yet made use of partial representations and the information contained therein. We analyze the quality of prosodic parameter assignments (pitch and duration) generated from partial utterance specifications (substituting defaults for missing features) in order to determine the requirements that symbolic incremental prosody modelling should meet. We find that broader, higher-level information helps to improve prosody even if lower-level information about the near future is as yet unavailable. Furthermore, we find that symbolic phrase-level or utterance-level information is most helpful towards the end of the phrase or utterance, respectively, that is, when this information is becoming available even in the incremental case. Thus, the negative impact of incremental processing can be minimized by using partial representations that are filled in incrementally.
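A small sketch of the "defaults for missing features" idea described above; the feature names and default values are invented, not the paper's feature inventory.

```python
# Neutral defaults for higher-level features that are not yet known
# when synthesis starts under incremental processing.
DEFAULTS = {
    "words_to_phrase_end": 3,      # assume the phrase goes on for a while
    "phrase_type": "declarative",
    "words_to_utterance_end": 6,
}

def complete(partial_spec):
    """Fill in unknown features with defaults; known values win."""
    return {**DEFAULTS, **partial_spec}

# Early in the utterance little is known; near the end, phrase-level
# information has become available even in the incremental case.
print(complete({}))
print(complete({"words_to_phrase_end": 1, "phrase_type": "interrogative"}))
```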
Automatic speech recognition (ASR) technology has been developed to such a level that off-the-shelf distributed speech recognition services are available (free of cost), which allow researchers to integrate speech into their applications with little development effort or expert knowledge, often leading to better results than previously used open-source tools. Often, however, such services do not accept language models or grammars but process free speech from any domain. While results are very good given the enormous size of the search space, they frequently contain out-of-domain words or constructs that cannot be understood by subsequent domain-dependent natural language understanding (NLU) components. We present a versatile post-processing technique based on phonetic distance that integrates domain knowledge with open-domain ASR results, leading to improved ASR performance. Notably, our technique is able to make use of domain restrictions using various degrees of domain knowledge, ranging from pure vocabulary restrictions via grammars or N-grams to restrictions on the acceptable utterances. We present results for a variety of corpora (mainly from human-robot interaction) where our combined approach significantly outperforms Google ASR as well as a plain open-source ASR solution.
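The paper's actual algorithm is not reproduced here; as a hedged sketch of the general technique, the following snaps each open-domain ASR word to the closest in-domain vocabulary item by edit distance over crude phonetic codes. A real system would use proper phoneme sequences, e.g. from a grapheme-to-phoneme tool; the code function below is only a stand-in.

```python
def edit_distance(a, b):
    """Levenshtein distance with a rolling DP row."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (ca != cb))
    return d[len(b)]

def crude_phonetic(word):
    """Collapse some spelling variation; a placeholder for real g2p."""
    return (word.lower().replace("ph", "f").replace("ck", "k")
                .replace("c", "k").replace("x", "ks"))

def snap_to_domain(asr_word, vocabulary):
    """Map an open-domain ASR word to the phonetically closest in-domain word."""
    return min(vocabulary,
               key=lambda w: edit_distance(crude_phonetic(asr_word),
                                           crude_phonetic(w)))

# "breaks" heard by open-domain ASR maps to the in-domain word "brakes".
print(snap_to_domain("breaks", ["brakes", "gas", "clutch"]))
```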
Human speakers plan and deliver their utterances incrementally, piece-by-piece, and it is obvious that their choice regarding phonetic details (and the details' peculiarities) is rarely determined by globally optimal solutions. In contrast, parametric speech synthesizers use a full-utterance context when optimizing vocoding parameters and when determining HMM states. Apart from being cognitively implausible, this impedes incremental use cases, where the future context is often at least partially unavailable. This paper investigates the 'locality' of features in parametric speech synthesis voices and takes some missing steps towards better HMM state selection and prosody modelling for incremental speech synthesis.