Speech can contain a wealth of diagnostically relevant information. This review article shows how methods of artificial intelligence, in particular machine learning and speech processing, can be applied to speech signals: to assess intelligibility, to automate standardized tests, and to determine medical scales and diagnoses. A concluding critical examination of acoustic markers across a variety of pathologies gives reason to believe that these markers indeed contain diagnostically relevant information.
Automated dementia screening enables early detection and intervention, reducing costs to healthcare systems and increasing quality of life for those affected. Depression shares symptoms with dementia, which complicates diagnosis. The research focus so far has been on the binary classification of dementia (DEM) and healthy controls (HC) using speech from picture description tests from a single dataset. In this work, we apply established baseline systems to discriminate cognitive impairment in speech from the semantic Verbal Fluency Test and the Boston Naming Test using text, audio, and emotion embeddings in a three-class classification problem (HC vs. mild cognitive impairment (MCI) vs. DEM). We perform cross-corpus and mixed-corpus experiments on two independently recorded German datasets to investigate generalization to larger populations and different recording conditions. In a detailed error analysis, we look at depression as a secondary diagnosis to understand what our classifiers actually learn.
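A minimal sketch of the cross-corpus protocol described in this abstract: train a classifier on pre-computed embeddings from one dataset and test on the other, then swap. The embedding arrays, their dimensionality, and the SVM classifier below are illustrative assumptions, not the paper's exact systems.

```python
# Cross-corpus sketch: fit on one German corpus, evaluate on the other.
# Feature extraction (text/audio/emotion embeddings) is assumed done.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Placeholder embeddings for two corpora; labels: 0=HC, 1=MCI, 2=DEM.
corpus_a = (rng.normal(size=(120, 256)), rng.integers(0, 3, 120))
corpus_b = (rng.normal(size=(90, 256)), rng.integers(0, 3, 90))

for (x_tr, y_tr), (x_te, y_te), name in [
    (corpus_a, corpus_b, "A->B"),
    (corpus_b, corpus_a, "B->A"),
]:
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    clf.fit(x_tr, y_tr)
    print(name, "macro-F1:", f1_score(y_te, clf.predict(x_te), average="macro"))
```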
Alzheimer's Disease (AD) results from the progressive loss of neurons in the hippocampus, which affects the capability to produce coherent language. It affects lexical, grammatical, and semantic processes as well as speech fluency. This paper considers the analysis of speech and language for the assessment of AD in the context of the Alzheimer's Dementia Recognition through Spontaneous Speech (ADReSSo) 2021 challenge. We propose to extract acoustic features such as X-vectors, prosody, and emotional embeddings as well as linguistic features such as perplexity and word embeddings. The data consist of speech recordings from AD patients and healthy controls; the transcriptions are obtained using a commercial automatic speech recognition system. We outperform the baseline results on the test set, both for classification and for the prediction of Mini-Mental State Examination (MMSE) scores, achieving a classification accuracy of 80% and an RMSE of 4.56 in the regression. Additionally, we found strong evidence for the influence of the interviewer on classification results: in cross-validation on the training set, we obtain 85% classification accuracy using the combined speech of interviewer and participant, and still 78% using interviewer speech alone.
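As an illustration of one of the linguistic features named above, the following sketch computes transcript perplexity under a pre-trained causal language model. GPT-2 is used here as a stand-in; the abstract does not specify which language model the authors used.

```python
# Perplexity of a transcript under a pre-trained causal LM.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy
        # over the predicted tokens; exp() of that is the perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("The boy is reaching for the cookie jar."))
```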
The detection of pathologies from speech features is usually defined as a binary classification task, with one class representing a specific pathology and the other representing healthy speech. In this work, we train neural networks, large-margin classifiers, and tree boosting machines to distinguish between four pathologies: Parkinson's disease, laryngeal cancer, cleft lip and palate, and oral squamous cell carcinoma. We show that latent representations extracted at different layers of a pre-trained wav2vec 2.0 system can be effectively used to classify these types of pathological voices. We evaluate the robustness of our classifiers by adding room impulse responses to the test data and by applying them to unseen speech corpora. Our approach achieves unweighted average F1-scores between 74.1% and 97.0%, depending on the model and the noise conditions used. The systems generalize and perform well on unseen data of healthy speakers sampled from a variety of different sources.
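A hedged sketch of how layer-wise latent representations can be extracted from a pre-trained wav2vec 2.0 model for a downstream classifier, as described above. The checkpoint name and the mean-pooling strategy are assumptions for illustration, not the paper's exact configuration.

```python
# Extract a fixed-size embedding from one wav2vec 2.0 transformer layer.
import torch
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def layer_embedding(waveform: torch.Tensor, layer: int) -> torch.Tensor:
    """Mean-pool the hidden states of one transformer layer (16 kHz audio)."""
    inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs, output_hidden_states=True).hidden_states
    # hidden[0] is the CNN feature projection; hidden[1:] are the 12 layers.
    return hidden[layer].mean(dim=1).squeeze(0)  # -> (768,)

emb = layer_embedding(torch.randn(16_000), layer=7)  # 1 s of dummy audio
print(emb.shape)
```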
The ACM Multimedia 2022 Computational Paralinguistics Challenge (ComParE) featured a sub-challenge on the classification of stuttering in order to bring attention to this important topic and engage a wider research community. Stuttering is a complex speech disorder characterized by blocks, prolongations of sounds and syllables, and repetitions of sounds and words. Accurately classifying the symptoms of stuttering has implications for the development of self-help tools and of specialized automatic speech recognition (ASR) systems that can handle atypical speech patterns. This paper provides a review of the challenge contributions, improves upon them with new state-of-the-art classification results for the KSF-C dataset, and explores cross-language training to demonstrate the potential of using datasets in multiple languages. To facilitate further research and reproducibility, the full KSF-C dataset, including test-set labels, is also released.
The analysis of phonological processes is crucial for evaluating speech development disorders in children, but it is hampered by the limited availability of children's audio data. This work focuses on automatic vowel error detection using a two-stage pipeline. The first stage uses a fine-tuned cross-lingual phone recognizer (wav2vec 2.0) to extract phone sequences from audio. The second stage employs a language model (BERT) to classify the phone sequence; it is trained entirely on synthetic transcripts to counteract the very broad range of potential mistakes. We evaluate the system on recordings of nonwords recited by preschool children as part of a speech development test. The results show that the classifier trained on synthetic data performs well, but its efficacy relies on the quality of the phone recognizer. The best classifier achieves a 94.7% F1 score when evaluated against phonetic ground truths, whereas the F1 score drops to 76.2% when using automatically recognized phone sequences.
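The following toy sketch illustrates the idea behind the second stage's synthetic training data: starting from a canonical phone sequence of a nonword, vowels are randomly substituted to generate labeled error examples. The phone inventory, the error model, and the nonword are simplified assumptions.

```python
# Generate synthetic vowel-error transcripts from a canonical sequence.
import random

VOWELS = ["a", "e", "i", "o", "u"]

def corrupt_vowels(phones: list[str], p_error: float = 0.3) -> tuple[list[str], int]:
    """Randomly substitute vowels; label 1 if any vowel error was made."""
    out, has_error = [], 0
    for ph in phones:
        if ph in VOWELS and random.random() < p_error:
            out.append(random.choice([v for v in VOWELS if v != ph]))
            has_error = 1
        else:
            out.append(ph)
    return out, has_error

canonical = ["b", "a", "l", "o", "n"]  # a toy nonword
for _ in range(3):
    print(corrupt_vowels(canonical))
```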
This paper empirically investigates the influence of different data splits and splitting strategies on the performance of dysfluency detection systems. For this, we perform experiments using wav2vec 2.0 models with a classification head as well as support vector machines (SVM) in conjunction with the features extracted from the wav2vec 2.0 model to detect dysfluencies. We train and evaluate the systems with different non-speaker-exclusive and speaker-exclusive splits of the Stuttering Events in Podcasts (SEP-28k) dataset to shed some light on the variability of results w.r.t. the partitioning method used. Furthermore, we show that the SEP-28k dataset is dominated by only a few speakers, making it difficult to evaluate. To remedy this problem, we created SEP-28k-Extended (SEP-28k-E), which contains semi-automatically generated speaker and gender information for the SEP-28k corpus, and we suggest different data splits, each useful for evaluating different aspects of dysfluency detection methods.
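A small sketch contrasting the two partitioning strategies compared in this paper: a naive random split can place clips of the same speaker on both sides, while a speaker-exclusive (grouped) split cannot. The clip counts and speaker IDs below are placeholders.

```python
# Non-speaker-exclusive vs. speaker-exclusive splits of a clip dataset.
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit

clips = np.arange(1000)                  # clip indices
speakers = np.repeat(np.arange(25), 40)  # 25 speakers, 40 clips each

# Non-speaker-exclusive: speakers may leak across the split.
tr, te = train_test_split(clips, test_size=0.2, random_state=0)

# Speaker-exclusive: no speaker appears on both sides.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
tr_x, te_x = next(gss.split(clips, groups=speakers))

leak = set(speakers[tr]) & set(speakers[te])
leak_x = set(speakers[tr_x]) & set(speakers[te_x])
print(f"leaked speakers: random={len(leak)}, exclusive={len(leak_x)}")
```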
Cleft lip and palate ranks among the most common congenital abnormalities and significantly influences speech articulation, affecting different phonemes to varying degrees. In a clinical context, a detailed diagnosis is carried out through time-consuming perceptual evaluations. We use perceptual ratings of different articulatory modifications at the phoneme level as ground truth and propose a system based on wav2vec 2.0, trained on the downstream task of classifying phonemic criteria as a multi-class, multi-label problem. The system is trained for detection at the utterance level, without the use of phoneme labels. To gain a clearer understanding of which areas of the speech signal have the greatest impact on classification, we assess the extent to which our system aligns with expert ratings at the phoneme level. Additionally, we examine which specific phonemes play a decisive role in determining the final classification of the labeled criteria. Using feature relevance explanation methods, we show that the phonemes marked as salient by experts contribute considerably more to the classification of the correct class. To the best of our knowledge, this is the first study to combine utterance-level classification of various articulatory modifications with phoneme-level interpretation, offering a more comprehensive basis for potential clinical applications.
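A minimal sketch of the multi-label setup described above: a linear head on top of pooled wav2vec 2.0 features, trained with a per-criterion sigmoid (binary cross-entropy) loss so that several phonemic criteria can be active at once. The feature dimension and the number of criteria are illustrative assumptions.

```python
# Multi-label classification head over pooled utterance embeddings.
import torch
import torch.nn as nn

NUM_CRITERIA = 5  # e.g. different articulatory modifications (assumed)
HIDDEN = 768      # wav2vec 2.0 base hidden size

head = nn.Linear(HIDDEN, NUM_CRITERIA)
criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per criterion

pooled = torch.randn(8, HIDDEN)                           # utterance embeddings
targets = torch.randint(0, 2, (8, NUM_CRITERIA)).float()  # multi-hot labels

logits = head(pooled)
loss = criterion(logits, targets)
loss.backward()
print(loss.item(), (torch.sigmoid(logits) > 0.5).float().mean().item())
```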
Stuttering is a complex speech disorder that negatively affects an individual's ability to communicate effectively. Persons who stutter (PWS) often suffer considerably under the condition and seek help through therapy. Fluency shaping is a therapy approach in which PWS learn to modify their speech to help them overcome their stutter. Mastering such speech techniques takes time and practice, even after therapy. Success is typically rated highly shortly after therapy, but relapse rates are high. The ability to detect stuttering events and speech modifications would allow PWS and speech pathologists to monitor speech behavior over a long time and track the level of fluency, making it possible to intervene early when lapses in fluency occur. To the best of our knowledge, no public dataset is available that contains speech from people who underwent stuttering therapy that changed their style of speaking. This work introduces the Kassel State of Fluency (KSoF) dataset, a therapy-based dataset containing over 5500 clips of PWS. The clips were labeled with six stuttering-related event types: blocks, prolongations, sound repetitions, word repetitions, interjections, and, specific to therapy, speech modifications. The audio was recorded during therapy sessions at the Institut der Kasseler Stottertherapie. The data will be made available for research purposes upon request.
Acoustic analysis helps to discriminate emotions based on non-verbal information, while linguistic analysis aims to capture verbal information from written sources. Acoustic and linguistic analyses can be applied in different scenarios where information related to emotions, mood, or affect is involved. The Arousal-Valence plane is commonly used to model emotional states in a multidimensional space. This study proposes a methodology for modeling the user's state on the Arousal-Valence plane in different scenarios. Acoustic and linguistic information is used as input to different deep learning architectures, mainly based on convolutional and recurrent neural networks, which are trained to model the Arousal-Valence plane. The proposed approach is used for the evaluation of customer satisfaction in call centers and for health-care applications in the assessment of depression in Parkinson's disease and the discrimination of Alzheimer's disease. F-scores of up to 0.89 are obtained for customer satisfaction, up to 0.82 for depression in Parkinson's patients, and up to 0.80 for Alzheimer's patients. These results confirm that the Arousal-Valence plane embeds information that can be used for different purposes.
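A hedged sketch of the kind of architecture the abstract describes: a small convolutional front end over acoustic feature frames followed by a recurrent layer, regressing a point on the Arousal-Valence plane. All layer sizes and the 40-dimensional input features are assumptions for illustration, not the paper's exact configuration.

```python
# Conv + GRU regressor producing (arousal, valence) per clip.
import torch
import torch.nn as nn

class ArousalValenceNet(nn.Module):
    def __init__(self, n_feats: int = 40):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_feats, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(64, 64, batch_first=True)
        self.out = nn.Linear(64, 2)  # (arousal, valence)

    def forward(self, x):            # x: (batch, time, n_feats)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, last = self.gru(h)        # final hidden state summarizes the clip
        return self.out(last.squeeze(0))

model = ArousalValenceNet()
print(model(torch.randn(4, 200, 40)).shape)  # -> torch.Size([4, 2])
```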