Zentrum für Künstliche Intelligenz (KIZ)
Refine
Document Type
- conference proceeding (article) (43)
- Article (13)
- Report (4)
- Part of a Book (3)
- Book (2)
Reviewed
- Begutachtet/Reviewed (52)
Keywords
Institute
Automatic speech recognition (ASR) for pathologic speech remains a major challenge due to high variability in articulation, phonation, and prosody distortions. In this work, we propose a pathology-aware speech encoder based on BEST-RQ pre-training, which incorporates 46k hours of speech, including
pathologic and atypical speech. We continue pre-training for domain adaptation and experiment with etiology-specific codebooks. We achieve a 13.2% relative word error rate (WER) improvement using the pathology-aware speech encoder with etiology-specific continued pre-training. Additionally, we examine the impact of incorporating synthetic and out-of-domain (OOD) data to further enhance ASR performance. Synthetic data reduces WER by up to 8.7%, while OOD data improves WER by 12.2%. Finally, we introduce a semantic similaritybased data augmentation technique to optimize data selection, achieving a WER improvement of up to 9.7% while minimizing
the need for additional training data.
We analyze different errors in speech recognition systems, focusing on consecutive insertions and deletions, known as hallucinations and elisions in
transformer-based end-to-end automatic speech recognition (ASR) systems. We compare errors from a TDNN-HMM, and whisper-based models on English and German spontaneous speech. Based on a human annotated subset of German lecture videos, we investigate whether these blocks of deletions affect the semantics of the utterance. Whisper performs best and preserves the meaning in 90% of the annotated error segments even containing consecutive deletions on this subset. We analyze the word error rate and do further analysis of errors using natural language processing to detect lemmatization errors, compound word errors, and out-of-vocabulary words. We discuss possible reasons and mitigations.
Self-supervised learning has been successfully used for various speech related tasks, including automatic speech recognition. BERT-based Speech pre-Training with Random-projection Quantizer (BEST-RQ) has achieved state-of-the-art results in speech recognition. In this work, we further optimize the BEST-RQ approach using Kullback-Leibler divergence as an additional regularizing loss and multicodebook extension per cluster derived from low-level feature clustering. Preliminary experiments on train-100 split of LibriSpeech result in a relative improvement of 11.2% on test-clean by using multiple codebooks, utilizing a combination of cross-entropy and Kullback-Leibler divergence further reduces the word error rate by 4.5%. The proposed optimizations on full LibriSpeech pre-training and fine-tuning result in relative word error rate improvements of up to 23.8% on test-clean and 30.6% on testother using 6 codebooks. Furthermore, the proposed setup leads to faster
convergence in pre-training and fine-tuning and additionally stabilizes the
pre-training.
Towards Multi-Level Transcript Segmentation: LoRA Fine-Tuning for Table-of-Contents Generation
(2025)
Segmenting speech transcripts into thematic sections benefits both downstream processing and users who depend on written text for accessibility. We introduce a novel approach to hierarchical topic segmentation in transcripts, generating multi-level tables of contents that capture both topic and subtopic boundaries. We compare zero-shot prompting and LoRA fine-tuning on large language models, while also exploring the integration of high-level speech pause features. Evaluations on English meeting recordings and multilingual lecture transcripts (Portuguese,
German) show significant improvements over established topic segmentation baselines. Additionally, we adapt a common evaluation measure for multi-level segmentation, taking into account all hierarchical levels within one metric.
Cycle-consistent generative adversarial networks have been widely used in non-parallel voice conversion (VC). Their ability to learn mappings between source and target features without relying on parallel training data eliminates the need for temporal alignments. However, most methods decouple the conversion of acoustic features from synthesizing the audio signal by using separate models for conversion and waveform synthesis. This work unifies conversion and synthesis into a single model, thereby eliminating the need for a separate vocoder. By leveraging cycle-consistent training and a self-supervised auxiliary training task, our model is able to efficiently generate converted high-quality raw audio waveforms. Subjective listening tests showed that our unified approach achieved improvements of up to 6.7% relative to the baseline in whispered VC. Mean opinion score predictions also yielded stable results in conventional VC (between 0.5% and 2.4% relative improvement).
In this work, we present our submission to the Speech Accessibility Project challenge for dysarthric speech recognition. We integrate parameter-efficient fine-tuning with latent audio representations to improve an encoder-decoder ASR system. Synthetic training data is generated by fine-tuning Parler-TTS
to mimic dysarthric speech, using LLM-generated prompts for corpus-consistent target transcripts. Personalization with x-vectors consistently reduces word error rates (WERs) over non-personalized fine-tuning. AdaLoRA adapters outperform full fine-tuning and standard low-rank adaptation, achieving relative WER reductions of ∼23% and ∼22%, respectively. Further improvements (∼5% WER reduction) come from incorporating wav2vec 2.0-based audio representations. Training with synthetic dysarthric speech yields up to ∼7% relative WER improvement over personalized fine-tuning alone.
Safety guard models that detect malicious queries aimed at large language models(LLMs) are essential for ensuring the secure and responsible deployment of LLMs in real-world applications. However, deploying existing safety guard models with billions of parameters alongside LLMs on mobile devices is impractical due to substantial memory requirements and latency. To reduce this cost, we distill a large teacher safety guard model into a smaller one using a labeled dataset of instruction-response pairs with binary harmfulness labels. Due to the limited diversity of harmful instructions in the existing labeled dataset, naively distilled models tend to underperform compared to larger models. To bridge the gap between small and large models, we propose HarmAug, a simple yet effective data augmentation method that involves jailbreaking an LLM and prompting it to generate harmful instructions. Given a prompt such as, “Make a single harmful instruction prompt that would elicit offensive content”, we add an affirmative prefix (e.g., “I have an idea for a prompt:”) to the LLM’s response. This encourages the LLM to continue generating the rest of the response, leading to sampling harmful instructions. Another LLM generates a response to the harmful instruction, and the teacher model labels the instruction-response pair. We empirically show that our HarmAug outperforms other relevant baselines. Moreover, a 435-millionparameter safety guard model trained with HarmAug achieves an F1 score comparable to larger models with over 7 billion parameters, and even outperforms them in AUPRC, while operating at less than 25% of their computational cost. Our code, safety guard model, and synthetic dataset are publicly available.
In this work, we present and evaluate SELMA, a Speech-Enabled Language Model for virtual Assistant interactions that integrates audio and text as inputs to a Large Language Model (LLM). SELMA is designed to handle three primary and two auxiliary tasks related to interactions with virtual assistants simultaneously within a single end-to-end model. We employ low-rank adaptation modules for parameter-efficient training of both the audio encoder and the LLM. Additionally, we implement a feature pooling strategy enabling the system to recognize global patterns and improve accuracy on tasks less
reliant on individual sequence elements. Experimental results on Voice
Trigger (VT) detection, Device-Directed Speech Detection (DDSD), and
Automatic Speech Recognition (ASR), demonstrate that our approach both simplifies the typical input processing pipeline of virtual assistants significantly and also improves performance compared to dedicated models for each individual task. SELMA yields relative Equal-Error Rate improvements of 64% on the VT detection task, and 22% on DDSD, while also achieving word error rates close to the baseline.
This study presents an ML approach for classifying digital radio operating modes evaluated on real-world transmissions. We generated 98 different parameterized radio signals from 17 digital operating modes, transmitted each of them on the 70 cm (UHF) amateur radio band, and recorded our transmissions with two different architectures of SDR receivers. Three lightweight ML models were trained exclusively on spectrograms of limited non-transmitted signals with random characters as payloads. This training involved an online data augmentation pipeline to simulate various radio channel impairments. Our best model, EfficientNetB0, achieved an accuracy of 93.80% across the 17 operating modes and 85.47% across all 98 parameterized radio signals, evaluated on our real-world transmissions with
Wikipedia articles as payloads. Furthermore, we analyzed the impact of varying signal durations & the number of FFT bins on classification, assessed the effectiveness of our simulated channel impairments, and tested our models across multiple simulated SNRs.