TY - CHAP
A1 - Schaffer, Stefan
A1 - Schwaetzer, Eva
A1 - Ruß, Aaron
A1 - Gustke, Oliver
ED - Baumann, Timo
T1 - Chatbot in the Museum – A Field Study of User Experience and Modality Usage
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - This paper describes a field study conducted with a museum chatbot at the Städel Museum Frankfurt. The chatbot uses the BERT language model for natural language processing and can be operated via touchscreen as well as via speech input. Prior to the study, hypotheses regarding the user experience of the system were formulated and a system-specific questionnaire was designed, which was used to inquire (among other things) about the perceived quality of the speech output and the frequency of audio guide use in museums. During the interaction with the chatbot, log data was collected and stored in the back-end system. The results show a significant correlation between perceived speech quality and user experience. An exploratory data analysis revealed that participants who used only speech input rated the system as significantly more stimulating than participants who used only touch input. Touch input turned out to be the most efficient input modality in terms of answer correctness and was rated highest regarding pragmatic quality. Interestingly, touch input was preferred by younger participants. We discuss our findings and conclude that speech interaction should be seriously considered to create engaging conversational user experiences in museums.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70758
SN - 978-3-95908-325-6
SP - 14
EP - 21
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Possamai de Menezes, João Vítor
A1 - Kleiner, Christian
A1 - Kainz, Marie-Anne
A1 - Echternach, Matthias
A1 - Birkholz, Peter
ED - Baumann, Timo
T1 - Synchrony of Glottal Area Waveform Parameters During the Production of Obstruents in Vowel Context
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - Obstruents are phonemes which require partial or total obstruction of airflow through the vocal tract. Their articulation also requires adjustments of the laryngeal settings, e.g., an abduction gesture to stop vocal fold vibration for voiceless obstruents. This study investigated the laryngeal settings during the production of voiced and voiceless obstruents in vowel context to analyze the degree of synchrony of the involved glottal gestures. High-speed laryngoscopy images were used to determine the glottal area waveform, from which the time functions of the parameters open quotient (OQ), fundamental frequency (f0), and AC and DC amplitude (ACA and DCA) were calculated and analyzed. Significant correlations were found between all pairs of parameters, with strong correlations between some of them, e.g., open quotient and AC amplitude. Correlations were also either consistently positive or negative for specific pairs of parameters across all investigated phonemes. These results could point to consistent patterns in laryngeal gestures that could enhance articulatory speech synthesis.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70808
SN - 978-3-95908-325-6
SP - 54
EP - 61
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Hillmann, Stefan
A1 - Kowol, Philine
A1 - Ahmad, Adnan
A1 - Tang, Ruochen
A1 - Möller, Sebastian
ED - Baumann, Timo
T1 - Usability and User Experience of a Chatbot for Student Support
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - This paper describes the usability evaluation of parts of the CHATU chatbot. The evaluation was conducted with 21 participants. A focus of this paper is the description of the carefully designed evaluation procedure, which aims to avoid textual priming of the participants. The general evaluation procedure can be applied to other speech- or text-based conversational systems, and additional material is provided. The evaluation results show that the usability and user experience of CHATU are rated positively. However, the naturalness and novelty of the interaction are not optimal, and the potential influence of users’ experience with LLMs on the evaluation is discussed.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70760
SN - 978-3-95908-325-6
SP - 22
EP - 29
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Kruspe, Anna
ED - Baumann, Timo
T1 - More than words: Advancements and Challenges in Speech Recognition for Singing
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - This paper addresses the challenges and advancements in speech recognition for singing, a domain distinctly different from standard speech recognition. Singing encompasses unique challenges, including extensive pitch variations, diverse vocal styles, and background music interference. We explore key areas such as phoneme recognition, language identification in songs, keyword spotting, and full lyrics transcription. I will describe some of my own experiences when performing research on these tasks just as they were starting to gain traction, but will also show how recent developments in deep learning and large-scale datasets have propelled progress in this field. My goal is to illuminate the complexities of applying speech recognition to singing, evaluate current capabilities, and outline future research directions.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70748
SN - 978-3-95908-325-6
SP - 1
EP - 10
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Schuler, Christian
A1 - Nayak, Shravan
A1 - Saha, Debjoy
A1 - Baumann, Timo
ED - Baumann, Timo
T1 - Can We See Your Response Before You Speak? Exploring Linguistic Information Found in Inter-Utterance Pauses
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - In this work we assess whether there is information in pauses in-between utterances of the same or different speakers that is predictive of the following speaker’s utterance. We present models that connect a person’s visual features before they speak to their upcoming utterance. In our experiments we find that out-of-the-box pre-trained models can already reach a better-than-chance performance in correlating video embeddings to utterance embeddings. In contrast, models that attempt to predict the first word after the pause do not outperform a unigram model, indicating that our models do not read lips (based, e.g., on co-articulation effects) but rather capture more fundamental aspects of the upcoming utterance.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70949
SN - 978-3-95908-325-6
SP - 165
EP - 172
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Schubert, Martha
A1 - Sinha, Yamini
A1 - Krüger, Julia
A1 - Siegert, Ingo
ED - Baumann, Timo
T1 - Speech Recognition Errors in ASR Engines and Their Impact on Linguistic Analysis in Psychotherapies
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - Modern intervention planning in psychotherapies may benefit from predicting process-relevant psychotherapy constructs through automated speech analysis. One essential step is the extraction of relevant linguistic speech markers by ASR engines, which, because of the highly sensitive data, must work offline. We analyze transcription errors from NeMo, Whisper, and Wav2Vec2.0, focusing on their impact on linguistic markers that usually require high-quality transcripts. By utilizing part-of-speech tagging, we examine error occurrences among different word types. The Linguistic Inquiry and Word Count (LIWC) software aids in extracting markers. We highlight challenges in transcribing spontaneous speech, prevalent in therapy, and compare results with the Mozilla CommonVoice dataset, which features read speech.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70999
SN - 978-3-95908-325-6
SP - 203
EP - 210
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Harnisch, Philipp L.
A1 - Hillmann, Stefan
ED - Baumann, Timo
T1 - Empirical Evaluation of ASR and NLU in a Multimodal Dialogue System for Survey Answering
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - PROM surveys, used to measure the effect of rehabilitation treatments, are typically filled out on paper and often suffer from low response rates. Replacing them with a multimodal survey system supporting touch and speech interaction could lower these hurdles and thus increase data quantity. This requires task-specific training samples for Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) to classify spoken answers into one of the standardized PROM answer options. Due to the lack of training data for medical PROM surveys, we created augmented text samples from each answer option description combined with different templates. To improve training capabilities, introduce a proper test set, and evaluate the ASR, we also collected 1,797 real voice samples in an empirical study. Further, we incorporate the contextual knowledge of the current question into our NLU architecture by implementing one classifier for every question scale. Our results reveal that training with empirical data leads to better results than training with augmented data from templates and original answer option descriptions. Because 33% of the answers were mislabeled by participants due to the ambiguity of the task, we obtain overall low NLU performance, with an accuracy of up to 51.1% and a rank-1 accuracy of up to 79.3%. We also find that our implementation of multiple scale-specific NLU classifiers significantly outperforms, by 8 percentage points, a single NLU classifier for all labels that incorporates the same contextual knowledge after prediction.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-71007
SN - 978-3-95908-325-6
SP - 211
EP - 218
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Venkateswaran, Siddarth
A1 - Al Foysal, Abdullah
A1 - Shaik, Nazeer Basha
A1 - Böck, Ronald
ED - Baumann, Timo
T1 - Is there Text in Wine? – S+U Learning-based Named Entity Recognition and Triplet Extraction from Wine Aroma Descriptors
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - Wine making is usually considered a domain far removed from the processing of speech and language. In one particular aspect, however, the domains of speech processing and wine making are related, namely in the description of wine aromas. These descriptors are used for creating wine expertise as well as more general (advertisement-like) textual representations. In the current paper, we use Natural Language Processing techniques, especially Named Entity Recognition, to identify Aspects and Opinions reflecting wine characteristics. These are combined with analyses of the respective relations (triplet extraction), building Aspect-Opinion pairs to establish indicative aroma descriptors and to approach the complex interplay amongst these individual statements. In our experiments, we rely on the Falstaff corpus, which comprises a large set of wine descriptions. This results in an average F1 score of around 0.85 for Aspect-Opinion classification. For triplet generation, multiple strategies were compared, resulting in an average F1 score of 0.67 in this challenging task. For both tasks we rely on only a handful of manually annotated samples, applying pseudo-labeling methods from seed data to achieve automatic labeling.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70931
SN - 978-3-95908-325-6
SP - 157
EP - 164
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Bauer, Judith
A1 - Zalkow, Frank
A1 - Müller, Meinard
A1 - Dittmar, Christian
ED - Baumann, Timo
T1 - Evaluating the Impact of Prosody Feature Normalization on the Controllability of Pitch in Speech Synthesis
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - Recent neural text-to-speech (TTS) models are able to synthesize highly natural speech signals using deep learning techniques. In practical applications, it can be desirable to have explicit control over the prosody (speech rate, fundamental frequency, and energy) of the synthesized speech. Such controllability can be achieved by adding prosody prediction modules, whose main purpose is to estimate plausible prosody features for each phoneme in the text input. This explicit modeling also allows for changing prosody features at inference time, consequently enabling the adjustment of the prosody in the synthesized audio. In this paper, we evaluate to which extent deliberate manipulation of such prosody features is reflected in the resulting speech audio. We focus particularly on changing the pitch (i.e., fundamental frequency) while applying different normalization strategies.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70976
SN - 978-3-95908-325-6
SP - 188
EP - 195
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Sinha, Yamini
A1 - Hintz, Jan
A1 - Siegert, Ingo
ED - Baumann, Timo
T1 - Evaluation of Audio Deepfakes – Systematic Review
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - Generative models for audio are commonly used for music composition, sound effect generation for video game development, audio restoration, voice cloning, etc. The ease of generating indistinguishable fake audio with deep learning poses a major threat to personal privacy, online security, and political discourse. Evaluating the quality and realism of these synthetic utterances is crucial for mitigating the potential for misinformation and harm. To assess this threat, this paper conducts a systematic review, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), of how these deepfake models are currently evaluated. The analysis of 86 papers shows that the majority of evaluations are conducted at the machine level, and highlights a research gap regarding the human perception of deepfakes. This paper explores the various methods and perceptual measures employed in assessing audio deepfakes and evaluates their strengths, limitations, and future directions.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70960
SN - 978-3-95908-325-6
SP - 181
EP - 187
PB - TUDpress
CY - Dresden
ER -

TY - CHAP
A1 - Sering, Konstantin
ED - Baumann, Timo
T1 - Speech/Non-Speech Classification Slightly Improves Synthesis Quality in PAULE
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - One of the tasks PAULE [1, 2] solves is finding suitable control parameter (cp-)trajectories for a given target acoustic. These cp-trajectories can be used to synthesize speech with the articulatory speech synthesizer of the VocalTractLab (VTL) [3]. If the target acoustic contains substantial microphone noise or other background noise, PAULE occasionally optimizes not for the speech in the target but for this background noise. By adding a speech/non-speech classifier to the feedback and planning loop in PAULE, this resynthesis of background noise should be mitigated. Unfortunately, the improvements were minor, which might be due to uninformative gradients of the classifier. The importance of informative gradients and the use of classifiers to adapt PAULE to different tasks are explained and discussed.
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70955
SN - 978-3-95908-325-6
SP - 173
EP - 180
PB - TUDpress
CY - Dresden
ER -