In recent years, speech processing for medical applications has gained significant traction. While pioneering work in the 1990s focused on processing sustained vowels or isolated utterances, work in the 2000s already showed that speech recognition systems, prosodic analysis, and natural language processing can be used to assess a large variety of speech pathologies. Here, we give an overview of how to classify selected speech pathologies, including stuttering, language development, speech intelligibility after surgery, dementia and Alzheimer's disease, depression, and state of mind. While each of these poses a rather well-defined problem in a lab setting, we discuss the issues that arise when integrating such methods into a clinical workflow such as diagnosis or monitoring. Starting from the question of whether such detectors can be used for general screening or rather as a specialist's tool, we explore the legal and privacy-related implications: patient-doctor conversations, working with children or seniors with dementia, bias towards examiner or patient, and on-device vs. cloud processing. We conclude with a set of open questions that should be addressed to help bring this research from the lab into routine clinical use.
The detection of pathologies from speech features is usually defined as a binary classification task with one class representing a specific pathology and the other class representing healthy speech. In this work, we train neural networks, large margin classifiers, and tree boosting machines to distinguish between four pathologies: Parkinson's disease, laryngeal cancer, cleft lip and palate, and oral squamous cell carcinoma. We show that latent representations extracted at different layers of a pre-trained wav2vec 2.0 system can be effectively used to classify these types of pathological voices. We evaluate the robustness of our classifiers by adding room impulse responses to the test data and by applying them to unseen speech corpora. Our approach achieves unweighted average F1-Scores between 74.1% and 97.0%, depending on the model and the noise conditions used. The systems generalize and perform well on unseen data of healthy speakers sampled from a variety of different sources.
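A minimal sketch of this kind of layer-wise feature extraction is shown below; the wav2vec 2.0 checkpoint, the mean pooling over frames, and the choice of a linear SVM are illustrative assumptions, not necessarily the exact configuration of the study.

```python
# Sketch: utterance-level embeddings from one wav2vec 2.0 layer, fed to a classifier.
# Checkpoint, layer index, and pooling are assumptions made for illustration.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.svm import SVC

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()

def layer_embedding(waveform_16khz: np.ndarray, layer: int = 6) -> np.ndarray:
    """Mean-pool the hidden states of one transformer layer into a fixed-size vector."""
    inputs = extractor(waveform_16khz, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer].squeeze(0).mean(dim=0).numpy()

# X_audio: list of 16 kHz mono waveforms; y: 1 = pathological, 0 = healthy (placeholder data)
# X = np.stack([layer_embedding(w) for w in X_audio])
# clf = SVC(kernel="linear", class_weight="balanced").fit(X, y)
```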
We analyze the impact of speaker adaptation in end-to-end automatic speech recognition models based on transformers and wav2vec 2.0 under different noise conditions. By including speaker embeddings obtained from x-vector and ECAPA-TDNN systems, as well as i-vectors, we achieve relative word error rate improvements of up to 16.3% on LibriSpeech and up to 14.5% on Switchboard. We show that the proven method of concatenating speaker vectors to the acoustic features and supplying them as auxiliary model inputs remains a viable option to increase the robustness of end-to-end architectures. The effect on transformer models is stronger when more noise is added to the input speech. The most substantial benefits for systems based on wav2vec 2.0 are achieved under moderate or no noise conditions. Both x-vectors and ECAPA-TDNN embeddings outperform i-vectors as speaker representations. The optimal embedding size depends on the dataset and also varies with the noise condition.
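The concatenation approach can be illustrated in a few lines of PyTorch; dimensions, variable names, and the frame-wise repetition below are assumptions for illustration rather than the exact configuration used in the paper.

```python
# Sketch: append a per-utterance speaker vector to every acoustic frame
# before feeding the encoder. Dimensions are illustrative only.
import torch

def add_speaker_vector(features: torch.Tensor, spk_vec: torch.Tensor) -> torch.Tensor:
    """
    features: (batch, frames, feat_dim)   e.g. log-Mel filterbanks
    spk_vec:  (batch, spk_dim)            e.g. x-vector / ECAPA-TDNN / i-vector
    returns:  (batch, frames, feat_dim + spk_dim)
    """
    frames = features.size(1)
    expanded = spk_vec.unsqueeze(1).expand(-1, frames, -1)   # repeat per frame
    return torch.cat([features, expanded], dim=-1)

# fbank = torch.randn(8, 400, 80)    # toy batch: 8 utterances, 400 frames, 80 Mel bins
# xvec  = torch.randn(8, 512)        # toy 512-dim speaker embeddings
# augmented = add_speaker_vector(fbank, xvec)   # (8, 400, 592)
```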
The analysis of phonological processes is crucial in evaluating speech development disorders in children, but encounters challenges due to limited children's audio data. This work focuses on automatic vowel error detection using a two-stage pipeline. The first stage uses a fine-tuned cross-lingual phone recognizer (wav2vec 2.0) to extract phone sequences from audio. The second stage employs a language model (BERT) to classify the phone sequence; it is trained entirely on synthetic transcripts to counteract the very broad range of potential mistakes. We evaluate the system on nonword audio recordings recited by preschool children from a speech development test. The results show that the classifier trained on synthetic data performs well, but its efficacy relies on the quality of the phone recognizer. The best classifier achieves a 94.7% F1 score when evaluated against phonetic ground truths, whereas the F1 score is 76.2% when using automatically recognized phone sequences.
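The two-stage pipeline could be sketched as follows; the phone-recognizer checkpoint is a public cross-lingual stand-in and the second-stage classifier name is purely hypothetical, since the fine-tuned models from the paper are not referenced here.

```python
# Sketch of a two-stage pipeline: CTC phone recognition followed by a
# sequence classifier over the recognized phone string. Model names are
# stand-ins / hypothetical placeholders, not the paper's actual checkpoints.
import torch
from transformers import (AutoProcessor, AutoModelForCTC,
                          AutoTokenizer, AutoModelForSequenceClassification)

# Stage 1: cross-lingual phone recognition (public checkpoint used as a stand-in)
proc = AutoProcessor.from_pretrained("facebook/wav2vec2-xlsr-53-espeak-cv-ft")
asr = AutoModelForCTC.from_pretrained("facebook/wav2vec2-xlsr-53-espeak-cv-ft")

def recognize_phones(waveform_16khz) -> str:
    inputs = proc(waveform_16khz, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        ids = asr(**inputs).logits.argmax(dim=-1)
    return proc.batch_decode(ids)[0]

# Stage 2: BERT-style classifier over phone sequences; "vowel-error-bert" is a
# hypothetical checkpoint fine-tuned on synthetic transcripts.
tok = AutoTokenizer.from_pretrained("vowel-error-bert")
clf = AutoModelForSequenceClassification.from_pretrained("vowel-error-bert")

def has_vowel_error(waveform_16khz) -> bool:
    phones = recognize_phones(waveform_16khz)
    logits = clf(**tok(phones, return_tensors="pt")).logits
    return bool(logits.argmax(dim=-1).item())
```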
This work aims to automatically evaluate whether the language development of children is age-appropriate. Validated speech and language tests are used for this purpose to test auditory memory. In this work, the task is to determine whether spoken nonwords have been uttered correctly. We compare different approaches that are motivated by modeling specific language structures: low-level features (FFT), speaker embeddings (ECAPA-TDNN), grapheme-motivated embeddings (wav2vec 2.0), and phonetic embeddings in the form of senones (ASR acoustic model). Each of the approaches provides input for VGG-like 5-layer CNN classifiers. We also examine adaptation per nonword. The proposed systems were evaluated on recordings of spoken nonwords collected in different kindergartens. ECAPA-TDNN and low-level FFT features do not explicitly model phonetic information; wav2vec 2.0 is trained on grapheme labels, and our ASR acoustic model features contain (sub-)phonetic information. We found that the more granular the phonetic modeling, the higher the achieved recognition rates. The best system, trained on ASR acoustic model features with VTLN, achieved an accuracy of 89.4% and an area under the ROC (Receiver Operating Characteristic) curve (AUC) of 0.923. This corresponds to a relative improvement of 20.2% in accuracy and 0.309 in AUC compared to the FFT baseline.
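A VGG-like 5-layer CNN classifier of the kind described above could look roughly as follows; channel counts, pooling, and input sizes are assumptions for illustration only.

```python
# Sketch of a VGG-like 5-layer CNN binary classifier over 2-D feature maps
# (FFT spectrogram, embedding sequence, or senone posteriors). Sizes are assumed.
import torch
import torch.nn as nn

class SmallVGG(nn.Module):
    def __init__(self, in_channels: int = 1, n_classes: int = 2):
        super().__init__()
        chans = [in_channels, 32, 64, 128, 128, 128]
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(chans[-1], n_classes))

    def forward(self, x):            # x: (batch, 1, freq, time)
        return self.head(self.features(x))

# model = SmallVGG()
# logits = model(torch.randn(4, 1, 128, 200))   # toy batch of spectrogram patches
```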
In recent years, machine learning, and in particular generative adversarial networks (GANs) and attention-based neural networks (transformers), have been successfully used to compose and generate music, both melodies and polyphonic pieces. Current research focuses foremost on style replication (e.g., generating a Bach-style chorale) or style transfer (e.g., classical to jazz) based on large amounts of recorded or transcribed music, which in turn also allows for fairly straightforward “performance” evaluation. However, most of these models are not suitable for human-machine co-creation through live interaction, nor is it clear how such models and the resulting creations would be evaluated.
This article presents a thorough review of music representation, feature analysis, heuristic algorithms, statistical and parametric modelling, and human and automatic evaluation measures, along with a discussion of which approaches and models seem most suitable for live interaction.
Cleft lip and palate ranks among the most common congenital abnormalities and significantly influences speech articulation, resulting in varying phonemic impacts. In a clinical context, a detailed diagnosis is carried out through time-consuming perceptual evaluations. We use perceptual ratings of different articulatory modifications on the phoneme level as ground truth and propose a system based on wav2vec 2.0, trained on the downstream task of classifying phonemic criteria as a multi-class, multi-label problem. The system is trained for detection at the utterance level, without the use of phoneme labels. To gain a clearer understanding of which areas of the speech signal have the greatest impact on classification, we assess the extent to which our system aligns with expert ratings at the phoneme level. Additionally, we examine which specific phonemes play a decisive role in determining the final classification of the labeled criteria. Using feature-relevance explanation methods, we show that salient phonemes marked by experts contribute considerably more to the classification of the correct class. To the best of our knowledge, this is the first study combining utterance-level classification of various articulatory modifications with phoneme-level interpretation, offering a more comprehensive understanding for potential clinical applications.
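The multi-class, multi-label setup can be sketched as a wav2vec 2.0 backbone with one sigmoid output per criterion; the backbone checkpoint, pooling strategy, and criterion names below are placeholders, not the paper's actual labels or architecture.

```python
# Sketch: utterance-level multi-label classification head on a wav2vec 2.0 backbone.
# Criterion names and the mean pooling are illustrative placeholders.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

CRITERIA = ["backing", "glottal_stop", "nasal_emission", "weak_pressure"]  # placeholders

class CleftCriteriaClassifier(nn.Module):
    def __init__(self, backbone: str = "facebook/wav2vec2-base"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(backbone)
        self.head = nn.Linear(self.encoder.config.hidden_size, len(CRITERIA))

    def forward(self, input_values):                  # (batch, samples) at 16 kHz
        hidden = self.encoder(input_values).last_hidden_state
        pooled = hidden.mean(dim=1)                   # utterance-level pooling
        return self.head(pooled)                      # one logit per criterion

# Multi-label training uses an independent sigmoid per criterion rather than a softmax:
# loss = nn.BCEWithLogitsLoss()(model(batch_audio), batch_targets.float())
```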
Spirio Sessions
(2021)
This paper presents an ongoing interdisciplinary research project that deals with free improvisation and human-machine interaction, involving a digital player piano and other musical instruments. Various technical concepts are developed by student participants in the project and continuously evaluated in artistic performances. Our goal is to explore methods for co-creative collaborations with artificial intelligences embodied in the player piano, enabling it to act as an equal improvisation partner for human musicians.
Personalizing Large Sequence-to-Sequence Speech Foundation Models With Speaker Representations
(2024)
We present a method to personalize large transformer-based encoder-decoder speech foundation models without the need for changes in the underlying model structure or training from scratch. This is achieved by projecting speaker-specific information into the latent space of the transformer decoder via a small neural network and learning to process the speaker information along with domain-specific information via parameter-efficient finetuning. We use this method to improve the automatic speech recognition results of spoken academic German and English. Our approach yields average relative word error rate (WER) improvements of approximately 29% on German academic speech and 25% on English academic speech. It also translates well to conversational speech, achieving relative WER improvements of up to 36%, and demonstrates modest gains of up to 5% on read speech. Moreover, we observe that incorporating utterances from the recent past as personalization context yields the most significant overall improvements and that changes in voice characteristics resulting from prolonged speaking have a minimal effect on the personalization quality of academic lectures.
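One way to realize such a projection of speaker information into the decoder's latent space is a small feed-forward projector producing an additional decoder input token; the dimensions and the prepending strategy below are assumptions, not necessarily the authors' exact design.

```python
# Conceptual sketch: map a speaker embedding into the decoder's hidden dimension.
# In a parameter-efficient setup, only this projector (plus adapters/LoRA) would be
# trained while the foundation model stays frozen. Sizes are illustrative.
import torch
import torch.nn as nn

class SpeakerProjector(nn.Module):
    def __init__(self, spk_dim: int = 192, model_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(spk_dim, model_dim),
                                 nn.ReLU(),
                                 nn.Linear(model_dim, model_dim))

    def forward(self, spk_vec: torch.Tensor) -> torch.Tensor:
        # (batch, spk_dim) -> (batch, 1, model_dim): one extra "speaker token"
        return self.net(spk_vec).unsqueeze(1)

# decoder_inputs = torch.cat([SpeakerProjector()(ecapa_embedding), token_embeddings], dim=1)
# would prepend the projected speaker information to the decoder input sequence.
```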
Generative adversarial networks for whispered to voiced speech conversion: a comparative study
(2024)
Generative Adversarial Networks (GANs) have demonstrated promising results as end-to-end models for whispered to voiced speech conversion. Leveraging non-autoregressive systems like GANs capable of performing conditional waveform generation eliminates the need for separate models to estimate voiced speech features, and leads to faster inference compared to autoregressive methods. This study aims to identify the optimal GAN architecture for the whispered to voiced speech conversion task by comparing six state-of-the-art models. Furthermore, we present a method for evaluating the preservation of speaker identity and local accent, using embeddings obtained from speaker and language identification systems. Our experimental results show that building the speech conversion system based on the HiFi-GAN architecture yields the best objective evaluation scores, outperforming the baseline by ∼9% relative using frequency-weighted Signal-to-Noise Ratio and Log Likelihood Ratio, as well as by ∼29% relative using Root Mean Squared Error. In subjective tests, HiFi-GAN yielded a mean opinion score of 2.9, significantly outperforming the baseline with a score of 1.4. Furthermore, HiFi-GAN enhanced ASR performance and preserved speaker identity and accent, with correct language detection rates of up to ∼98%.
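Speaker-identity preservation of this kind is commonly measured as the cosine similarity between speaker embeddings of the reference and the converted audio; the sketch below uses a public SpeechBrain ECAPA-TDNN checkpoint as one possible embedding extractor, which is an assumption rather than the system used in the study.

```python
# Sketch: speaker-identity preservation via cosine similarity of speaker embeddings.
# Uses a public SpeechBrain ECAPA-TDNN checkpoint as an example extractor.
# (Older SpeechBrain versions expose this class under speechbrain.pretrained instead.)
import torch
import torchaudio
from speechbrain.inference.speaker import EncoderClassifier

encoder = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")

def speaker_similarity(path_reference: str, path_converted: str) -> float:
    embeddings = []
    for path in (path_reference, path_converted):
        wav, _ = torchaudio.load(path)                 # mono waveform, shape (1, samples)
        embeddings.append(encoder.encode_batch(wav).squeeze())
    return torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item()

# A similarity close to 1.0 suggests the converted speech keeps the reference speaker's identity.
```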