TY - CHAP
A1 - Braun, Franziska
A1 - Bayerl, Sebastian P.
A1 - Pérez-Toro, Paula A.
A1 - Hönig, Florian
A1 - Lehfeld, Hartmut
A1 - Hillemacher, Thomas
A1 - Nöth, Elmar
A1 - Bocklet, Tobias
A1 - Riedhammer, Korbinian
T1 - Classifying Dementia in the Presence of Depression
BT - A Cross-Corpus Study
N2 - Automated dementia screening enables early detection and intervention, reducing costs to healthcare systems and increasing quality of life for those affected. Depression shares symptoms with dementia, adding complexity to diagnoses. The research focus so far has been on the binary classification of dementia (DEM) and healthy controls (HC) using speech from picture description tests from a single dataset. In this work, we apply established baseline systems to discriminate cognitive impairment in speech from the semantic Verbal Fluency Test and the Boston Naming Test using text, audio, and emotion embeddings in a 3-class classification problem (HC vs. MCI vs. DEM). We perform cross-corpus and mixed-corpus experiments on two independently recorded German datasets to investigate generalization to larger populations and different recording conditions. In a detailed error analysis, we look at depression as a secondary diagnosis to understand what our classifiers actually learn.
Y1 - 2023
U6 - https://doi.org/10.48550/arXiv.2308.08306
ER -
TY - CHAP
A1 - Riedhammer, Korbinian
A1 - Baumann, Ilja
A1 - Bayerl, Sebastian P.
A1 - Bocklet, Tobias
A1 - Braun, Franziska
A1 - Wagner, Dominik
T1 - Medical Speech Processing for Diagnosis and Monitoring
BT - Clinical Use Cases
N2 - In recent years, speech processing for medical applications has gained significant traction. While pioneering work in the 1990s focused on processing sustained vowels or isolated utterances, work in the 2000s already showed that speech recognition systems, prosodic analysis, and natural language processing can be used to assess a large variety of speech pathologies. Here, we give an overview of how to classify selected speech pathologies, including stuttering, language development, speech intelligibility after surgery, dementia and Alzheimer's disease, and depression and state of mind. While each of these poses a rather well-defined problem in a lab setting, we discuss the issues that arise when integrating such methods into a clinical workflow such as diagnosis or monitoring. Starting from the question of whether such detectors can be used for general screening or rather as a specialist's tool, we explore the legal and privacy-related implications: patient-doctor conversations, working with children or seniors with dementia, bias towards examiner or patient, and on-device vs. cloud processing. We conclude with a set of open questions that should be addressed to help bring this research from the lab to routine clinical use.
Y1 - 2023
SP - 1417
EP - 1420
ER -
TY - CHAP
A1 - Wagner, Dominik
A1 - Baumann, Ilja
A1 - Braun, Franziska
A1 - Bayerl, Sebastian P.
A1 - Nöth, Elmar
A1 - Riedhammer, Korbinian
A1 - Bocklet, Tobias
T1 - Multi-class Detection of Pathological Speech with Latent Features
BT - How does it perform on unseen data?
N2 - The detection of pathologies from speech features is usually defined as a binary classification task, with one class representing a specific pathology and the other representing healthy speech. In this work, we train neural networks, large margin classifiers, and tree boosting machines to distinguish between four pathologies: Parkinson's disease, laryngeal cancer, cleft lip and palate, and oral squamous cell carcinoma.
We show that latent representations extracted at different layers of a pre-trained wav2vec 2.0 system can be effectively used to classify these types of pathological voices. We evaluate the robustness of our classifiers by adding room impulse responses to the test data and by applying them to unseen speech corpora. Our approach achieves unweighted average F1-scores between 74.1% and 97.0%, depending on the model and the noise conditions used. The systems generalize and perform well on unseen data of healthy speakers sampled from a variety of different sources.
Y1 - 2023
U6 - https://doi.org/10.21437/Interspeech.2023-464
SN - 2958-1796
SP - 2318
EP - 2322
ER -
TY - CHAP
A1 - Hintz, Jan
A1 - Bayerl, Sebastian P.
A1 - Sinha, Yamini
A1 - Riedhammer, Korbinian
A1 - Siegert, Ingo
T1 - Impact of pathological speech on speaker anonymization
BT - A Proof of Concept
N2 - With the ever-increasing use of voice assistants, concerns about privacy and data security arise. Speech contains highly personal data that can be exploited for user profiling or identification [1]. On-device speech anonymization can serve as a measure to counteract this [2]. While these anonymization systems are being tested and evaluated through challenges and benchmarks [3], the commonly used datasets include no or only a few individuals with speech impairments, leading to low inclusivity, possible data bias, and privacy concerns for these groups [5]. For anonymization to work, it is crucial to evaluate bias and, if needed, counteract it. Stuttering is a speech disorder with diverse characteristics. Its well-known, defining symptoms are blocks, repetitions, and prolongations of sounds, syllables, and words while speaking [4]. The primary stuttering symptoms vary strongly in their characteristics and occur over different time contexts, making stuttering an ideal candidate for studying the effects of pathological speech on the application of anonymization techniques. This paper analyzes the impact of stuttering on speaker anonymization with regard to the level of anonymity and utility. We present two methods to conceal speaker identity: voice conversion and re-synthesis. The first, voice conversion, adapts the way a source speaker speaks to that of a target speaker. It preserves some prosody of the source speaker, especially temporal aspects, with the goal of protecting the identity while at the same time preserving pathological speech patterns. This could be applied in pathology-related processing, such as self-help training applications. The second, re-synthesis, uses an automatic speech recognition system to generate a transcript, which is afterwards used to synthesize a new voice with a text-to-speech system. This process disentangles speaker information and text, granting a high level of anonymization. To compare these methods, we use subjective and objective measures.
Y1 - 2023
ER -
TY - JOUR
A1 - Bayerl, Sebastian P.
A1 - Gerczuk, Maurice
A1 - Batliner, Anton
A1 - Bergler, Christian
A1 - Amiriparian, Shahin
A1 - Schuller, Björn
A1 - Nöth, Elmar
A1 - Riedhammer, Korbinian
T1 - Classification of Stuttering – The ComParE challenge and beyond
JF - Computer Speech & Language
N2 - The ACM Multimedia 2022 Computational Paralinguistics Challenge (ComParE) featured a sub-challenge on the classification of stuttering in order to bring attention to this important topic and engage a wider research community.
Stuttering is a complex speech disorder characterized by blocks, prolongations of sounds and syllables, and repetitions of sounds and words. Accurately classifying the symptoms of stuttering has implications for the development of self-help tools and specialized automatic speech recognition (ASR) systems that can handle atypical speech patterns. This paper provides a review of the challenge contributions, improves upon them with new state-of-the-art classification results on the KSF-C dataset, and explores cross-language training to demonstrate the potential of datasets in multiple languages. To facilitate further research and reproducibility, the full KSF-C dataset, including test-set labels, is also released.
KW - Dysfluency
KW - Stuttering
KW - ComParE challenge
KW - Paralinguistics
KW - Pathological speech
Y1 - 2023
U6 - https://doi.org/10.1016/j.csl.2023.101519
VL - 81
ER -
TY - CHAP
A1 - Wagner, Dominik
A1 - Baumann, Ilja
A1 - Bayerl, Sebastian P.
A1 - Riedhammer, Korbinian
A1 - Bocklet, Tobias
T1 - Speaker Adaptation for End-To-End Speech Recognition Systems in Noisy Environments
N2 - We analyze the impact of speaker adaptation in end-to-end automatic speech recognition models based on transformers and wav2vec 2.0 under different noise conditions. By including speaker embeddings obtained from x-vector and ECAPA-TDNN systems, as well as i-vectors, we achieve relative word error rate improvements of up to 16.3% on LibriSpeech and up to 14.5% on Switchboard. We show that the proven method of concatenating speaker vectors to the acoustic features and supplying them as auxiliary model inputs remains a viable option for increasing the robustness of end-to-end architectures. The effect on transformer models is stronger when more noise is added to the input speech. The most substantial benefits for systems based on wav2vec 2.0 are achieved under moderate or no noise conditions. Both x-vectors and ECAPA-TDNN embeddings outperform i-vectors as speaker representations. The optimal embedding size depends on the dataset and also varies with the noise condition.
KW - speaker adaptation
KW - automatic speech recognition
KW - end-to-end systems
KW - transformer
KW - wav2vec 2.0
Y1 - 2023
U6 - https://doi.org/10.48550/arXiv.2211.08774
ER -
TY - CHAP
A1 - Wagner, Dominik
A1 - Bayerl, Sebastian P.
A1 - Bocklet, Tobias
A1 - Draxler, Christoph
T1 - Implementing Easy-to-Use Recipes for the Switchboard Benchmark
N2 - We report on our contribution of templates for tokenization, language modeling, and automatic speech recognition (ASR) on the Switchboard benchmark to the open-source general-purpose toolkit SpeechBrain. Three recipes for the training of end-to-end ASR systems were implemented. We describe their model architectures as well as the necessary data preparation steps. The word error rates achievable with our models are comparable to or better than those of other popular toolkits. Pre-trained ASR models were made available on HuggingFace. They can be easily integrated into research projects or used directly for quick inference via a hosted inference API.
Y1 - 2023
SP - 150
EP - 157
PB - TUDpress, Dresden
ER -
TY - JOUR
A1 - Bayerl, Sebastian P.
A1 - Wagner, Dominik
A1 - Baumann, Ilja
A1 - Bocklet, Tobias
A1 - Riedhammer, Korbinian
T1 - Detecting Vocal Fatigue with Neural Embeddings
JF - Journal of Voice
N2 - Vocal fatigue refers to the feeling of tiredness and weakness of the voice due to extended use. This paper investigates the effectiveness of neural embeddings for the detection of vocal fatigue.
We compare x-vector, ECAPA-TDNN, and wav2vec 2.0 embeddings on a corpus of academic spoken English. Low-dimensional mappings of the data reveal that neural embeddings capture information about the change in vocal characteristics of a speaker during prolonged voice usage. We show that vocal fatigue can be reliably predicted using all three types of neural embeddings after 40 minutes of continuous speaking when temporal smoothing and normalization are applied to the extracted embeddings. We employ support vector machines for classification and achieve accuracy scores of 81% using x-vectors, 85% using ECAPA-TDNN embeddings, and 82% using wav2vec 2.0 embeddings as input features. We obtain an accuracy score of 76% when the trained system is applied to a different speaker and recording environment without any adaptation.
KW - Vocal fatigue
KW - Neural embeddings
KW - Visualization
KW - Detection
Y1 - 2023
U6 - https://doi.org/10.1016/j.jvoice.2023.01.012
SN - 0892-1997
PB - Elsevier BV
ER -
TY - CHAP
A1 - Wagner, Dominik
A1 - Bayerl, Sebastian P.
A1 - Maruri, Hector A. Cordourier
A1 - Bocklet, Tobias
T1 - Generative Models for Improved Naturalness, Intelligibility, and Voicing of Whispered Speech
T2 - 2022 IEEE Spoken Language Technology Workshop (SLT)
N2 - This work adapts two recent generative model architectures and evaluates their effectiveness for the conversion of whispered speech to normal speech. We incorporate the normal target speech into the training criterion of vector-quantized variational autoencoders (VQ-VAEs) and MelGANs, thereby conditioning the systems to recover voiced speech from whispered inputs. Objective and subjective quality measures indicate that both VQ-VAEs and MelGANs can be modified to perform the conversion task. We find that the proposed approaches significantly improve the Mel cepstral distortion (MCD) metric by at least 25% relative to a DiscoGAN baseline. Subjective listening tests suggest that the MelGAN-based system significantly improves naturalness, intelligibility, and voicing compared to the whispered input speech. A novel evaluation measure based on differences between latent speech representations also indicates that our MelGAN-based approach yields improvements relative to the baseline.
KW - whispered speech
KW - speech conversion
KW - VAE
KW - GAN
KW - generative models
Y1 - 2023
SN - 979-8-3503-9690-4
U6 - https://doi.org/10.1109/SLT54892.2023.10022796
SP - 943
EP - 948
PB - IEEE
ER -