External Publications
Document Type
- Article (446)
- Conference Proceeding (Article) (372)
- Part of a Book (152)
- Conference Proceeding (Presentation, Abstract) (64)
- Conference Talk (38)
- Book (37)
- Report (28)
- Working Paper (16)
- Doctoral Thesis (11)
- Review (9)
Keywords
- Cervical cancer (14)
- Additive manufacturing (14)
- Structural design (11)
- Concrete component (9)
- Reinforcement (9)
- Carbon-fibre-reinforced plastic (9)
- Lamella (9)
- Load-bearing capacity (9)
- Imaging technique (8)
- Surgery (8)
Institute
- Fakultät Angewandte Sozial- und Gesundheitswissenschaften (392)
- Fakultät Informatik und Mathematik (281)
- Fakultät Maschinenbau (218)
- Institut für Sozialforschung und Technikfolgenabschätzung (155)
- Labor Empirische Sozialforschung (154)
- Institut für Sozialforschung und Technikfolgenabschätzung (IST) (153)
- Fakultät Bauingenieurwesen (108)
- Fakultät Angewandte Natur- und Kulturwissenschaften (89)
- Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe) (88)
- Regensburg Center of Health Sciences and Technology - RCHST (58)
Review status
- peer-reviewed (274)
- reviewed (30)
We present a toolkit for manipulating and visualising time-aligned linguistic data such as dialogue transcripts or language processing data. The package complements existing editing tools by allowing for conversion between their formats, information extraction from the raw files, and by adding sophisticated, and easily extended, methods for visualising the dynamics of dialogue processing. To illustrate the versatility of the package, we describe its use in three different projects at our site.
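As a rough illustration of the kind of time-aligned data such a toolkit handles, the following sketch (hypothetical names, not the package's actual API) represents annotations as labelled intervals on a shared timeline and queries what is active at a given point in time.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A labelled span on a shared timeline (times in seconds)."""
    start: float
    end: float
    label: str

@dataclass
class Tier:
    """One layer of annotation, e.g. the words of one speaker or ASR output."""
    name: str
    intervals: list

def active_labels(tiers, t):
    """Return, per tier, the label that is active at time t (or None)."""
    return {
        tier.name: next((iv.label for iv in tier.intervals
                         if iv.start <= t < iv.end), None)
        for tier in tiers
    }

# Toy dialogue fragment: two tiers aligned on the same timeline.
words = Tier("words:A", [Interval(0.0, 0.4, "take"), Interval(0.4, 0.9, "the"),
                         Interval(0.9, 1.5, "red"), Interval(1.5, 2.1, "cross")])
gaze = Tier("gaze:A", [Interval(0.2, 1.8, "at-board")])

print(active_labels([words, gaze], 1.0))  # {'words:A': 'red', 'gaze:A': 'at-board'}
```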
Generating Situated Assisting Utterances to Facilitate Tactile-Map Understanding: A Prototype System
(2012)
Tactile maps are important substitutes for visual maps for blind and visually impaired people and the efficiency of tactile-map reading can largely be improved by giving assisting utterances that make use of spatial language. In this paper, we elaborate earlier ideas for a system that generates such utterances and present a prototype implementation based on a semantic conceptualization of the movements that the map user performs. A worked example shows the plausibility of the solution and the output that the prototype generates given input derived from experimental data.
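The following is a heavily simplified, hypothetical sketch of the general idea of mapping a conceptualised exploration movement to a situated assisting utterance; it does not reproduce the paper's semantic conceptualization or generation templates.

```python
def assisting_utterance(move):
    """Map a conceptualised exploration movement to a situated assisting utterance.
    `move` is a hypothetical conceptualisation, e.g. derived from tracked finger
    positions on the tactile map; the prototype's representation differs."""
    if move["type"] == "follow":
        return (f"You are following the {move['object']} "
                f"towards the {move['towards']}.")
    if move["type"] == "rest":
        return f"Your finger is resting on the {move['object']}."
    return "Keep exploring the map."

print(assisting_utterance({"type": "follow", "object": "main road",
                           "towards": "train station"}))
```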
If we can model the cognitive and communicative processes underlying speech, we should be able to better predict what a speaker will do. With this idea as inspiration, we examine a number of prosodic and timing features as potential sources of information on what words the speaker is likely to say next. In spontaneous dialog we find that word probabilities do vary with such features. Using perplexity as the metric, the most informative of these included recent speaking rate, volume, and pitch, and time until end of utterance. Using simple combinations of such features to augment trigram language models gave up to an 8.4% perplexity benefit on the Switchboard corpus, and up to a 1.0% relative reduction in word error rate (0.3% absolute) on the Verbmobil II corpus.
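To make the metric concrete, here is a minimal sketch (toy numbers, hypothetical interpolation weight) of perplexity and of linearly interpolating a trigram estimate with a prosody-conditioned estimate; interpolation is only one simple way such features can augment a language model and is not claimed to be the paper's exact scheme.

```python
import math

def perplexity(word_probs):
    """Perplexity of a word sequence given the model's per-word probabilities."""
    return math.exp(-sum(math.log(p) for p in word_probs) / len(word_probs))

def augmented_prob(p_trigram, p_given_prosody, lam=0.9):
    """Linear interpolation of a trigram estimate with a prosody-conditioned
    estimate, e.g. P(word | high recent speaking rate)."""
    return lam * p_trigram + (1.0 - lam) * p_given_prosody

# Toy numbers only, to show the metric; they do not reproduce the reported results.
baseline = [0.02, 0.10, 0.05, 0.01]
prosody_conditioned = [0.04, 0.12, 0.06, 0.03]
augmented = [augmented_prob(p, q) for p, q in zip(baseline, prosody_conditioned)]
print(round(perplexity(baseline), 1), round(perplexity(augmented), 1))
```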
We present a model of semantic processing of spoken language that (a) is robust against ill-formed input, such as can be expected from automatic speech recognisers, (b) respects both syntactic and pragmatic constraints in the computation of most likely interpretations, (c) uses a principled, expressive semantic representation formalism (RMRS) with a well-defined model theory, and (d) works continuously (producing meaning representations on a word-by-word basis, rather than only for full utterances) and incrementally (computing only the additional contribution by the new word, rather than re-computing for the whole utterance-so-far). We show that the joint satisfaction of syntactic and pragmatic constraints improves the performance of the NLU component (around 10 % absolute, over a syntax-only baseline).
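The word-by-word, delta-only style of processing described in (d) can be sketched roughly as follows; a toy flat-semantics lexicon stands in for RMRS, and all names are hypothetical.

```python
# Toy flat-semantics lexicon standing in for RMRS fragments (hypothetical).
LEXICON = {
    "book":   {"action(book)"},
    "a":      set(),
    "flight": {"object(flight)"},
    "to":     set(),
    "paris":  {"destination(paris)"},
}

def incremental_semantics(words):
    """Extend the representation word by word, adding only the new word's
    contribution instead of recomputing the whole utterance-so-far."""
    representation = set()
    for word in words:
        delta = LEXICON.get(word, set())   # contribution of the new word only
        representation |= delta
        yield word, sorted(representation)

for word, rep in incremental_semantics("book a flight to paris".split()):
    print(f"after '{word}': {rep}")
```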
In this paper we do two things: a) we discuss in general terms the task of incremental reference resolution (IRR), in particular resolution of exophoric reference, and specify metrics for measuring the performance of dialogue system components tackling this task, and b) we present a simple Bayesian filtering model of IRR that performs reasonably well just using words directly (no structure information and no hand-coded semantics): it picks the right referent out of 12 for around 50% of real-world dialogue utterances in our test corpus. It is also able to learn to interpret not only words but also hesitations, just as humans have been shown to do in similar situations, namely as markers of references to hard-to-describe entities.
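A minimal sketch of the Bayesian-filtering idea, assuming hypothetical per-word likelihoods P(observation | referent) and a toy domain of three referents instead of the paper's twelve:

```python
def bayes_update(belief, likelihoods):
    """One filtering step: multiply the prior belief over referents by
    P(observation | referent) and renormalise."""
    posterior = {r: belief[r] * likelihoods.get(r, 1e-6) for r in belief}
    total = sum(posterior.values()) or 1.0
    return {r: p / total for r, p in posterior.items()}

# Toy domain of three referents (the paper uses twelve real-world objects).
referents = ["red cross", "blue circle", "green square"]
belief = {r: 1.0 / len(referents) for r in referents}

# Hypothetical learned likelihoods, including one for a hesitation marker.
observations = [
    {"red cross": 0.4, "blue circle": 0.3, "green square": 0.3},  # "the"
    {"red cross": 0.7, "blue circle": 0.1, "green square": 0.2},  # "red"
    {"red cross": 0.3, "blue circle": 0.3, "green square": 0.4},  # "uh" (hesitation)
]
for obs in observations:
    belief = bayes_update(belief, obs)

print(max(belief, key=belief.get), {r: round(p, 2) for r, p in belief.items()})
```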
We describe work done at three sites on designing conversational agents capable of incremental processing. We focus on the middleware layer in these systems, which takes care of passing around and maintaining incremental information between the modules of such agents. All implementations are based on the abstract model of incremental dialogue processing proposed by Schlangen and Skantze (2009), and the paper shows what different instantiations of the model can look like given specific requirements and application areas.
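The following is a minimal, hypothetical sketch of the kind of bookkeeping such a middleware layer does in the incremental-unit model: modules place incremental units (IUs) in a buffer and propagate ADD/REVOKE/COMMIT edit messages to downstream consumers. It is an illustration of the abstract model, not any of the three implementations described.

```python
from dataclasses import dataclass

@dataclass
class IU:
    """Incremental unit: one minimal piece of module output, e.g. a word hypothesis."""
    payload: str
    committed: bool = False
    revoked: bool = False

class Module:
    """A processing module with a right buffer; edits are pushed to listeners."""
    def __init__(self, name):
        self.name = name
        self.right_buffer = []
        self.listeners = []

    def emit(self, edit, iu):
        if edit == "ADD":
            self.right_buffer.append(iu)
        elif edit == "REVOKE":
            iu.revoked = True
        elif edit == "COMMIT":
            iu.committed = True
        for listener in self.listeners:
            listener(self.name, edit, iu)

# A toy recogniser adding, then revising, a word hypothesis.
asr = Module("asr")
asr.listeners.append(lambda src, edit, iu: print(f"{src}: {edit} '{iu.payload}'"))
hypothesis = IU("forty")
asr.emit("ADD", hypothesis)     # tentative output, may still change
asr.emit("REVOKE", hypothesis)  # withdrawn in the light of new evidence
asr.emit("ADD", IU("four"))
```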
This project aimed to establish the feasibility of creating a procedural system for generating expressive facial animation based on an affective agent. A procedural system supporting a limited number of emotional expression changes was created alongside keyframed animations of the same expression changes, and audience response to the two approaches was tested empirically. The results seem to partially support the conclusion that the generated procedural animations are comparable to the keyframed ones in terms of perceptual validity.
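As an illustration of what "procedural" means here in contrast to keyframing, a small sketch (hypothetical expression parameters, not the study's rig) that computes a transition between two expressions at runtime rather than playing back hand-authored keyframes:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

# Hypothetical expression parameters (e.g. blendshape weights).
NEUTRAL = {"brow_raise": 0.0, "mouth_corner_up": 0.0, "eye_open": 0.6}
JOY     = {"brow_raise": 0.2, "mouth_corner_up": 0.9, "eye_open": 0.7}

def procedural_transition(src, dst, n_frames):
    """Compute per-frame expression parameters at runtime instead of
    playing back hand-authored keyframes."""
    for i in range(n_frames):
        t = i / max(n_frames - 1, 1)
        yield {k: round(lerp(src[k], dst[k], t), 2) for k in src}

for frame in procedural_transition(NEUTRAL, JOY, 5):
    print(frame)
```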
Speaking as part of a conversation is different from reading out loud. Speech synthesis systems, however, are typically developed using assumptions (at least implicitly) that are more true of the latter than the former situation. We address one particular aspect, which is the assumption that a fully formulated sentence is available for synthesis. We have built a system that does not make this assumption but rather can synthesize speech given incrementally extended input. In an evaluation experiment, we found that in a dynamic domain where what is talked about changes quickly, subjects rated the output of this system as more ‘naturally pronounced’ than that of a baseline system that employed standard synthesis, despite the synthesis quality objectively being degraded. Our results highlight the importance of considering a synthesizer’s ability to support interactive use-cases when determining the adequacy of synthesized speech.
Incremental speech synthesis (iSS) accepts input and produces output in consecutive chunks that only together result in a full utterance. Systems that use iSS thus have the ability to adapt their utterances while they are ongoing. Having available less than the full utterance to plan the acoustic realisation has downsides, however, as global optimisation is not possible anymore. In this paper we present a strategy for incrementalizing the symbolic pre-processing component of speech synthesis and assess the influence of a reduction in "lookahead", i.e. in knowledge about the rest of the utterance, on prosodic quality. We found that high quality incremental output can be achieved even with a lookahead of slightly less than one phrase, allowing for timely system reaction.
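A minimal sketch of the lookahead notion (hypothetical chunking; the paper incrementalises the symbolic pre-processing of an actual synthesiser): each chunk must be processed while only a few following words are known, rather than the full utterance.

```python
def chunks_with_lookahead(words, chunk_size=3, lookahead=2):
    """Yield (chunk, visible_context) pairs: each chunk must be processed while
    only `lookahead` further words are known, not the full utterance."""
    for i in range(0, len(words), chunk_size):
        chunk = words[i:i + chunk_size]
        context = words[i + chunk_size:i + chunk_size + lookahead]
        yield chunk, context

utterance = "the next train to the airport leaves from platform four".split()
for chunk, context in chunks_with_lookahead(utterance):
    print("process:", " ".join(chunk), "| lookahead:", " ".join(context) or "(none)")
```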