TY - CHAP
A1 - Georges, Munir
A1 - Huang, Jonathan
A1 - Bocklet, Tobias
T1 - Compact Speaker Embedding: lrx-vector
N2 - Deep neural networks (DNN) have recently been widely used in speaker recognition systems, achieving state-of-the-art performance on various benchmarks. The x-vector architecture is especially popular in this research community, due to its excellent performance and manageable computational complexity. In this paper, we present the lrx-vector system, which is the low-rank factorized version of the x-vector embedding network. The primary objective of this topology is to further reduce the memory requirement of the speaker recognition system. We discuss the deployment of knowledge distillation for training the lrx-vector system and compare against low-rank factorization with SVD. On the VOiCES 2019 far-field corpus we were able to reduce the weights by 28% compared to the full-rank x-vector system while keeping the recognition rate constant (1.83% EER).
KW - speaker recognition
KW - x-vector
KW - low power
Y1 - 2020
U6 - https://doi.org/10.48550/arXiv.2008.05011
ER -
TY - CHAP
A1 - Bayerl, Sebastian P.
A1 - Wagner, Dominik
A1 - Nöth, Elmar
A1 - Bocklet, Tobias
A1 - Riedhammer, Korbinian
ED - Sojka, Petr
ED - Kopeček, Ivan
ED - Pala, Karel
ED - Horák, Aleš
T1 - The Influence of Dataset Partitioning on Dysfluency Detection Systems
N2 - This paper empirically investigates the influence of different data splits and splitting strategies on the performance of dysfluency detection systems. For this, we perform experiments using wav2vec 2.0 models with a classification head as well as support vector machines (SVM) in conjunction with the features extracted from the wav2vec 2.0 model to detect dysfluencies. We train and evaluate the systems with different non-speaker-exclusive and speaker-exclusive splits of the Stuttering Events in Podcasts (SEP-28k) dataset to shed some light on the variability of results w.r.t. the partitioning method used. Furthermore, we show that the SEP-28k dataset is dominated by only a few speakers, making it difficult to evaluate. To remedy this problem, we created SEP-28k-Extended (SEP-28k-E), containing semi-automatically generated speaker and gender information for the SEP-28k corpus, and suggest different data splits, each useful for evaluating other aspects of methods for dysfluency detection.
KW - stuttering
KW - dysfluencies
KW - pathological speech
KW - SEP-28k
Y1 - 2022
U6 - https://doi.org/10.48550/arXiv.2206.03400
PB - Springer International Publishing
ER -
TY - CHAP
A1 - Lopatka, Kuba
A1 - Bocklet, Tobias
T1 - State Sequence Pooling Training of Acoustic Models for Keyword Spotting
T2 - Proceedings Interspeech 2020
N2 - We propose a new training method to improve HMM-based keyword spotting. The loss function is based on a score computed with the keyword/filler model from the entire input sequence. It is equivalent to max/attention pooling but is based on prior acoustic knowledge. We also employ a multi-task learning setup by predicting both LVCSR and keyword posteriors. We compare our model to a baseline trained on frame-wise cross entropy, with and without per-class weighting. We employ a low-footprint TDNN for acoustic modeling. The proposed training yields significant and consistent improvement over the baseline in adverse noise conditions. The FRR on cafeteria noise is reduced from 13.07% to 5.28% at 9 dB SNR and from 37.44% to 6.78% at 5 dB SNR. We obtain these results with only 600 unique training keyword samples.
     The training method is independent of the frontend and acoustic model topology.
KW - keyword spotting
KW - machine learning
KW - speech recognition
Y1 - 2020
U6 - https://doi.org/10.21437/Interspeech.2020-2722
SN - 2958-1796
SP - 4338
EP - 4342
ER -
TY - CHAP
A1 - Simic, Christopher
A1 - Bocklet, Tobias
T1 - Self-Supervised Adaptive AV Fusion Module for Pre-Trained ASR Models
T2 - ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Y1 - 2024
U6 - https://doi.org/10.1109/ICASSP48485.2024.10448047
SP - 12787
EP - 12791
ER -
TY - CHAP
A1 - Seeberger, Philipp
A1 - Bocklet, Tobias
A1 - Riedhammer, Korbinian
T1 - Information Type Classification with Contrastive Task-Specialized Sentence Encoders
N2 - User-generated information content has become an important information source in crisis situations. However, classification models suffer from noise and event-related biases, which still poses a challenging task and requires sophisticated task adaptation. To address these challenges, we propose the use of contrastive task-specialized sentence encoders for downstream classification. We apply the task specialization on the CrisisLex, HumAID, and TrecIS information type classification tasks and show performance gains w.r.t. F1-score. Furthermore, we analyse the cross-corpus and cross-lingual capabilities for two German event relevancy classification datasets.
Y1 - 2023
U6 - https://doi.org/10.48550/arXiv.2312.11020
PB - Association for Computational Linguistics
ER -
TY - CHAP
A1 - Baumann, Ilja
A1 - Wagner, Dominik
A1 - Schuster, Maria
A1 - Nöth, Elmar
A1 - Bocklet, Tobias
T1 - Towards Interpretability of Automatic Phoneme Analysis in Cleft Lip and Palate Speech
T2 - IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
N2 - Cleft Lip and Palate ranks among the most common congenital abnormalities and significantly influences speech articulation, resulting in varying phonemic impacts. In a clinical context, a detailed diagnosis is carried out by time-consuming perceptual evaluations. We use perceptual ratings of different articulatory modifications on phoneme level as ground truth and propose a system based on wav2vec 2.0, trained on the downstream task of classifying phonemic criteria as a multi-class and multi-label problem. The system is trained for detection on utterance level, without the use of phoneme labels. To gain a clearer understanding of which areas of the speech signal have the greatest impact on classification, we assess the extent to which our system aligns with expert ratings at the phoneme level. Additionally, we examine which specific phonemes play a decisive role in determining the final classification of the labeled criteria. Using feature relevance explanation methods, the results show that salient phonemes marked by experts contribute substantially more to the classification of the correct class. To the best of our knowledge, this is the first study combining the classification of various utterance-level articulatory modifications with phoneme-level interpretation, offering a more comprehensive understanding for potential clinical applications.
KW - pathologic speech
KW - cleft lip and palate
KW - children’s speech
KW - automatic assessment
Y1 - 2024
U6 - https://doi.org/10.1109/ICASSP48485.2024.10447632
SP - 12602
EP - 12606
ER -
TY - CHAP
A1 - Ranzenberger, Thomas
A1 - Freier, Carolin
A1 - Reinold, Luca
A1 - Riedhammer, Korbinian
A1 - Schneider, Fabian
A1 - Simic, Christopher
A1 - Simon, Claudia
A1 - Freisinger, Steffen
A1 - Georges, Munir
A1 - Bocklet, Tobias
T1 - A Multidisciplinary Approach to AI-Based Self-Motivated Learning and Teaching with Large Language Models
T2 - Proceedings of DELFI Workshops 2024
N2 - We present a learning experience platform that uses machine learning methods to support students and lecturers in self-motivated online learning and teaching processes. The platform is being developed as an agile open-source collaborative project supported by multiple universities and partners. The development is guided didactically, reviewed, and scientifically evaluated in several cycles. Transparency, data protection, and the copyright-compliant use of the system are a central part of the project. The system further employs large language models (LLMs). Due to privacy concerns, we utilize locally hosted LLM instances and explicitly do not rely on available cloud products. Students and lecturers can interact with an LLM-based chatbot in the current prototype. The AI-generated outputs contain cross-references to the current educational video’s context, indicating whether sections are based on the lecture’s context or on world knowledge. We present the prototype and the results of our qualitative evaluation from the perspective of lecturers and students.
KW - Artificial Intelligence in Education
KW - Learning Experience Platform
KW - Open Source Software
KW - Large Language Models
Y1 - 2024
U6 - https://doi.org/10.18420/delfi2024_11
PB - Gesellschaft für Informatik e.V.
ER -
TY - CHAP
A1 - Braun, Franziska
A1 - Bayerl, Sebastian
A1 - Hönig, Florian
A1 - Lehfeld, Hartmut
A1 - Hillemacher, Thomas
A1 - Bocklet, Tobias
A1 - Riedhammer, Korbinian
T1 - Infusing Acoustic Pause Context into Text-Based Dementia Assessment
N2 - Speech pauses, alongside content and structure, offer a valuable and non-invasive biomarker for detecting dementia. This work investigates the use of pause-enriched transcripts in transformer-based language models to differentiate the cognitive states of subjects with no cognitive impairment, mild cognitive impairment, and Alzheimer’s dementia based on their speech from a clinical assessment. We address three binary classification tasks: onset, monitoring, and dementia exclusion. The performance is evaluated through experiments on a German Verbal Fluency Test and a Picture Description Test, comparing the model’s effectiveness across different speech production contexts. Starting from a textual baseline, we investigate the effect of incorporating pause information and acoustic context. We show that the test should be chosen depending on the task, and similarly, lexical pause information and acoustic cross-attention contribute differently.
KW - speech biomarkers
KW - dementia assessment
KW - neuropsychological tests
KW - pathological speech
Y1 - 2024
U6 - https://doi.org/10.21437/Interspeech.2024-2496
SN - 2958-1796
ER -
TY - CHAP
A1 - Wagner, Dominik
A1 - Baumann, Ilja
A1 - Ranzenberger, Thomas
A1 - Riedhammer, Korbinian
A1 - Bocklet, Tobias
T1 - Personalizing Large Sequence-to-Sequence Speech Foundation Models With Speaker Representations
N2 - We present a method to personalize large transformer-based encoder-decoder speech foundation models without the need for changes in the underlying model structure or training from scratch. This is achieved by projecting speaker-specific information into the latent space of the transformer decoder via a small neural network and learning to process the speaker information along with domain-specific information via parameter-efficient finetuning. We use this method to improve the automatic speech recognition results of spoken academic German and English. Our approach yields average relative word error rate (WER) improvements of approximately 29% on German academic speech and 25% on English academic speech. It also translates well to conversational speech, achieving relative WER improvements of up to 36%, and demonstrates modest gains of up to 5% on read speech. Moreover, we observe that incorporating utterances from the recent past as personalization context yields the most significant overall improvements and that changes in voice characteristics resulting from prolonged speaking have a minimal effect on the personalization quality of academic lectures.
Y1 - 2024
U6 - https://doi.org/10.1109/SLT61566.2024.10832252
ER -
TY - JOUR
A1 - Wagner, Dominik
A1 - Baumann, Ilja
A1 - Bocklet, Tobias
T1 - Generative adversarial networks for whispered to voiced speech conversion: a comparative study
JF - International Journal of Speech Technology
N2 - Generative Adversarial Networks (GANs) have demonstrated promising results as end-to-end models for whispered to voiced speech conversion. Leveraging non-autoregressive systems like GANs capable of performing conditional waveform generation eliminates the need for separate models to estimate voiced speech features and leads to faster inference compared to autoregressive methods. This study aims to identify the optimal GAN architecture for the whispered to voiced speech conversion task by comparing six state-of-the-art models. Furthermore, we present a method for evaluating the preservation of speaker identity and local accent, using embeddings obtained from speaker- and language-identification systems. Our experimental results show that building the speech conversion system based on the HiFi-GAN architecture yields the best objective evaluation scores, outperforming the baseline by ∼9% relative using frequency-weighted Signal-to-Noise Ratio and Log Likelihood Ratio, as well as by ∼29% relative using Root Mean Squared Error. In subjective tests, HiFi-GAN yielded a mean opinion score of 2.9, significantly outperforming the baseline with a score of 1.4. Furthermore, HiFi-GAN enhanced ASR performance and preserved speaker identity and accent, with correct language detection rates of up to ∼98%.
KW - Speech conversion
KW - Generative adversarial networks
KW - Whispered speech
KW - Voiced speech
Y1 - 2024
U6 - https://doi.org/10.1007/s10772-024-10161-1
VL - 27
ER -
TY - CHAP
A1 - Wagner, Dominik
A1 - Lee, Seanie
A1 - Baumann, Ilja
A1 - Seeberger, Philipp
A1 - Riedhammer, Korbinian
A1 - Bocklet, Tobias
T1 - Optimized Speculative Sampling for GPU Hardware Accelerators
T2 - Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
N2 - In this work, we optimize speculative sampling for parallel hardware accelerators to improve sampling speed. We notice that substantial portions of the intermediate matrices necessary for speculative sampling can be computed concurrently. This allows us to distribute the workload across multiple GPU threads, enabling simultaneous operations on matrix segments within thread blocks. This results in profiling time improvements ranging from 6% to 13% relative to the baseline implementation, without compromising accuracy. To further accelerate speculative sampling, probability distributions parameterized by softmax are approximated by sigmoid. This approximation approach results in significantly greater relative improvements in profiling time, ranging from 37% to 94%, with a minor decline in accuracy. We conduct extensive experiments on both automatic speech recognition and summarization tasks to validate the effectiveness of our optimization methods.
Y1 - 2024
U6 - https://doi.org/10.18653/v1/2024.emnlp-main.370
PB - Association for Computational Linguistics
CY - Miami, Florida, USA
ER -
TY - CHAP
A1 - Ranzenberger, Thomas
A1 - Bocklet, Tobias
A1 - Freisinger, Steffen
A1 - Georges, Munir
A1 - Glockner, Kevin
A1 - Herygers, Aaricia
A1 - Riedhammer, Korbinian
A1 - Schneider, Fabian
A1 - Simic, Christopher
A1 - Zakaria, Khabbab
T1 - Extending HAnS: Large Language Models for Question Answering, Summarization, and Topic Segmentation in an ML-Based Learning Experience Platform
T2 - Elektronische Sprachsignalverarbeitung 2024, Tagungsband der 35. Konferenz, Regensburg, 6.-8. März 2024
N2 - The use of chatbots based on large language models (LLMs) and their impact on society are influencing our learning experience platform Hochschul Assistenz-System (HAnS). HAnS uses machine learning (ML) methods to support students and lecturers in the online learning and teaching processes [1]. This paper introduces LLM-based features available in HAnS that use the transcript of our improved Automatic Speech Recognition (ASR) pipeline, which achieves an average transcription duration of 45 seconds and an average word error rate (WER) of 6.66% on over 8 hours of audio data from 7 lecture videos. An LLM-based chatbot can answer questions on the lecture content, as the ASR transcript is provided as context. Summarization and topic segmentation also use the LLM to improve our learning experience platform.
     During playback, we generate multiple-choice questions within a period of 3 minutes, using the LLM and the ASR transcript as context, and display them in the HAnS frontend.
Y1 - 2024
SN - 978-3-95908-325-6
PB - TUPress
CY - Dresden
ER -
TY - JOUR
A1 - Pérez-Toro, Paula Andrea
A1 - Vásquez-Correa, Juan Camilo
A1 - Bocklet, Tobias
A1 - Nöth, Elmar
A1 - Orozco-Arroyave, Juan Rafael
T1 - User State Modeling Based on the Arousal-Valence Plane: Applications in Customer Satisfaction and Health-Care
JF - IEEE Transactions on Affective Computing
N2 - Acoustic analysis helps to discriminate emotions according to non-verbal information, while linguistic analysis aims to capture verbal information from written sources. Acoustic and linguistic analyses can be addressed for different applications where information related to emotions, mood, or affect is involved. The arousal-valence plane is commonly used to model emotional states in a multidimensional space. This study proposes a methodology focused on modeling the user’s state based on the arousal-valence plane in different scenarios. Acoustic and linguistic information are used as input to feed different deep learning architectures, mainly based on convolutional and recurrent neural networks, which are trained to model the arousal-valence plane. The proposed approach is used for the evaluation of customer satisfaction in call centers and for health-care applications in the assessment of depression in Parkinson’s disease and the discrimination of Alzheimer’s disease. F-scores of up to 0.89 are obtained for customer satisfaction, of up to 0.82 for depression in Parkinson’s patients, and of up to 0.80 for Alzheimer’s patients. The proposed approach confirms that there is information embedded in the arousal-valence plane that can be used for different purposes.
KW - Arousal-valence plane
KW - acoustic
KW - linguistic
KW - customer satisfaction
KW - Alzheimer’s disease
KW - depression
Y1 - 2021
U6 - https://doi.org/10.1109/taffc.2021.3112543
SN - 1949-3045
VL - 14
IS - 2
SP - 1533
EP - 1546
PB - Institute of Electrical and Electronics Engineers (IEEE)
ER -
TY - CHAP
A1 - Perez-Toro, P. A.
A1 - Vasquez-Correa, J. C.
A1 - Arias-Vergara, T.
A1 - Klumpp, P.
A1 - Sierra-Castrillon, M.
A1 - Roldan-Lopez, M. E.
A1 - Aguillon, D.
A1 - Hincapie-Henao, L.
A1 - Tobon-Quintero, C. A.
A1 - Bocklet, Tobias
A1 - Schuster, M.
A1 - Orozco-Arroyave, J. R.
A1 - Nöth, E.
T1 - Acoustic and Linguistic Analyses to Assess Early-Onset and Genetic Alzheimer’s Disease
T2 - ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
N2 - The PSEN1-E280A or Paisa mutation is responsible for most of the Early-Onset Alzheimer’s (EOA) disease cases in Colombia. It affects a large kindred of over 5000 members that present the same phenotype. The most common symptoms are related to language disorders, where speech fluency is also affected due to the difficulty of accessing semantic information intentionally. This study proposes the use of acoustic and linguistic methods to extract features from speech recordings and their transcriptions to discriminate people with conditions related to the Paisa mutation. We consider state-of-the-art word-embedding methods such as Word2Vec and Bidirectional Encoder Representations from Transformers (BERT) to process the transcripts. The speech signals are modeled using traditional acoustic features and speaker embeddings.
     To the best of our knowledge, this is the first study focused on evaluating genetic Alzheimer’s and EOA using acoustics and linguistics.
KW - PSEN1-E280A
KW - Alzheimer’s Disease
KW - Acoustic Analysis
KW - Linguistic Analysis
Y1 - 2021
SN - 978-1-7281-7605-5
U6 - https://doi.org/10.1109/ICASSP39728.2021.9414009
SP - 8338
EP - 8342
PB - IEEE
ER -
TY - CHAP
A1 - Scheuerer, Ralph
A1 - Haderlein, Tino
A1 - Nöth, Elmar
A1 - Bocklet, Tobias
T1 - Applying X-Vectors on Pathological Speech After Larynx Removal
T2 - 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
N2 - Speaker embeddings extracted from time-delay neural networks (TDNNs) contributed to major recent advancements in speaker recognition and verification. We use an X-Vector system trained on augmented VoxCeleb1 and VoxCeleb2 data to obtain embeddings for pathological speech after total or partial larynx removal. We show that our model is able to effectively distinguish and visualize patient groups when generating embeddings. We further compare various regression models on the task of automatically predicting different perceptual ratings by speech therapists (intelligibility, vocal effort, and overall quality) based on the extracted speaker embeddings. For both patient groups we show Pearson correlations in the range of +0.8; we find that Random Forest and Support Vector Regression produce scores that best resemble the experts' assessments.
KW - laryngectomy
KW - intelligibility
KW - pathological speech
KW - x-vectors
Y1 - 2021
SN - 978-1-6654-3739-4
U6 - https://doi.org/10.1109/asru51503.2021.9688278
VL - 2021
SP - 1079
EP - 1086
PB - IEEE
ER -
TY - CHAP
A1 - Bundscherer, Maximilian
A1 - Schmitt, Thomas H.
A1 - Bayerl, Sebastian P.
A1 - Auerbach, Thomas
A1 - Bocklet, Tobias
T1 - An Acoustical Machine Learning Approach to Determine Abrasive Belt Wear of Wide Belt Sanders
T2 - 2022 IEEE Sensors
N2 - This paper describes a machine learning approach to determine the abrasive belt wear of wide belt sanders used in industrial processes based on acoustic data, regardless of the sanding process-related parameters feed speed, grit size, and type of material. Our approach utilizes decision tree, random forest, k-nearest neighbors, and neural network classifiers to detect the belt wear from spectrograms, Mel spectrograms, MFCC, IMFCC, and LFCC, yielding an accuracy of up to 86.1% on five levels of belt wear. An accuracy of 96% could be achieved with different decision tree classifiers specialized in different sanding parameter configurations. The classifiers could also determine with an accuracy of 97% whether the machine is currently sanding or idle, and detect the sanding parameters feed speed and grit size with accuracies of 98.4% and 98.8%, respectively. We show that low-dimensional mappings of high-dimensional features can be used to visualize belt wear and sanding parameters meaningfully.
KW - Acoustic sensors
KW - Abrasive belt wear
KW - Tool wear
KW - Machine learning
KW - Industrial process
KW - Wide belt sanding machines
Y1 - 2022
SN - 978-1-6654-8464-0
U6 - https://doi.org/10.1109/SENSORS52175.2022.9967324
VL - 2022
PB - IEEE
ER -
TY - CHAP
A1 - Schmitt, Thomas H.
A1 - Bundscherer, Maximilian
A1 - Drechsel, Ralf
A1 - Bocklet, Tobias
T1 - Machine learning based optimization of a ceramic bushing manufacturing process
T2 - 2022 IEEE Sensors
N2 - Machine learning (ML) has shown great promise in a variety of domains in recent years.
     ML models are known to require large amounts of labeled training data, keeping small to medium-sized businesses from utilizing them. This paper presents an ML-based approach to optimize a ceramic bushing manufacturing process by predicting the employed press-fit process as a function of press punch position. Accurate predictions would ensure optimal process configuration, guaranteeing quality and reducing waste. Models are trained in a supervised manner to predict the press-fit process and the ceramic defect probabilities as functions of press punch position. We were able to predict the press-fit process with a mean correlation of 0.996 and assess whether the process would damage the ceramic with a mean precision of 96.7%. Our results exemplify how ML can be used to predict and optimize highly specialised processes even with small datasets.
KW - Manufacturing
KW - machine learning
KW - optimization
KW - ceramic bushing
Y1 - 2022
SN - 978-1-6654-8464-0
U6 - https://doi.org/10.1109/sensors52175.2022.9967124
PB - IEEE
ER -
TY - CHAP
A1 - Klumpp, P.
A1 - Bocklet, Tobias
A1 - Arias-Vergara, T.
A1 - Vásquez-Correa, J. C.
A1 - Pérez-Toro, P. A.
A1 - Bayerl, Sebastian P.
A1 - Orozco-Arroyave, J. R.
A1 - Nöth, Elmar
T1 - The Phonetic Footprint of Covid-19?
T2 - Interspeech 2021
N2 - Against the background of the ongoing pandemic, this year’s Computational Paralinguistics Challenge featured a classification problem to detect Covid-19 from speech recordings. The presented approach is based on a phonetic analysis of speech samples; thus, it enabled us not only to discriminate between Covid and non-Covid samples, but also to better understand how the condition influenced an individual’s speech signal. Our deep acoustic model was trained with datasets collected exclusively from healthy speakers. It served as a tool for segmentation and feature extraction on the samples from the challenge dataset. Distinct patterns were found in the embeddings of phonetic classes that have their place of articulation deep inside the vocal tract. We observed profound differences in classification results for development and test splits, similar to the baseline method. We concluded that, based on our phonetic findings, it was safe to assume that our classifier was able to reliably detect a pathological condition located in the respiratory tract. However, we found no evidence to claim that the system was able to discriminate between Covid-19 and other respiratory diseases.
KW - COVID-19
Y1 - 2021
U6 - https://doi.org/10.21437/Interspeech.2021-1488
SN - 2958-1796
SP - 441
EP - 445
PB - ISCA
CY - ISCA
ER -
TY - CHAP
A1 - Baumann, Ilja
A1 - Wagner, Dominik
A1 - Bayerl, Sebastian P.
A1 - Bocklet, Tobias
T1 - Nonwords Pronunciation Classification in Language Development Tests for Preschool Children
T2 - Interspeech 2022
N2 - This work aims to automatically evaluate whether the language development of children is age-appropriate. Validated speech and language tests are used for this purpose to test the auditory memory. In this work, the task is to determine whether spoken nonwords have been uttered correctly. We compare different approaches that are motivated to model specific language structures: low-level features (FFT), speaker embeddings (ECAPA-TDNN), grapheme-motivated embeddings (wav2vec 2.0), and phonetic embeddings in the form of senones (ASR acoustic model). Each of the approaches provides input for VGG-like 5-layer CNN classifiers. We also examine the adaptation per nonword.
     The evaluation of the proposed systems was performed using recordings of spoken nonwords from different kindergartens. ECAPA-TDNN and low-level FFT features do not explicitly model phonetic information; wav2vec 2.0 is trained on grapheme labels, while our ASR acoustic model features contain (sub-)phonetic information. We found that the more granular the phonetic modeling, the higher the achieved recognition rates. The best system, trained on ASR acoustic model features with VTLN, achieved an accuracy of 89.4% and an area under the ROC (Receiver Operating Characteristic) curve (AUC) of 0.923. This corresponds to a relative improvement in accuracy of 20.2% and in AUC of 0.309 compared to the FFT baseline.
Y1 - 2022
U6 - https://doi.org/10.21437/interspeech.2022-10777
SN - 2958-1796
VL - 2022
SP - 3643
EP - 3647
PB - ISCA
ER -
TY - CHAP
A1 - Chen, Wenda
A1 - Huang, Jonathan
A1 - Bocklet, Tobias
T1 - Length- and Noise-Aware Training Techniques for Short-Utterance Speaker Recognition
T2 - Interspeech 2020
N2 - Speaker recognition performance has been greatly improved with the emergence of deep learning. Deep neural networks show the capacity to effectively deal with the impacts of noise and reverberation, making them attractive for far-field speaker recognition systems. The x-vector framework is a popular choice for generating speaker embeddings in recent literature due to its robust training mechanism and excellent performance on various test sets. In this paper, we start with early work on including invariant representation learning (IRL) in the loss function and modify the approach with centroid alignment (CA) and length variability cost (LVC) techniques to further improve robustness in noisy, far-field applications. This work mainly focuses on improvements for short-duration test utterances (1-8 s). We also present improved results on long-duration tasks. In addition, this work discusses a novel self-attention mechanism. On the VOiCES far-field corpus, the combination of the proposed techniques achieves relative improvements of 7.0% for extremely short and 8.2% for full-duration test utterances on equal error rate (EER) over our baseline system.
KW - speaker recognition
KW - invariant representation learning
KW - centroid alignment
KW - x-vector
KW - far-field
Y1 - 2020
U6 - https://doi.org/10.21437/interspeech.2020-2872
SN - 2958-1796
SP - 3835
EP - 3839
PB - ISCA
CY - ISCA
ER -