A Multimodal Approach to Device-Directed Speech Detection with Large Language Models

Abstract: Interactions with virtual assistants typically start with a predefined trigger phrase followed by the user command. To make interactions with the assistant more intuitive, we explore whether it is feasible to drop the requirement that users must begin each command with a trigger phrase. We explore this task in three ways: First, we train classifiers using only acoustic information obtained from the audio waveform. Second, we take the decoder outputs of an automatic speech recognition (ASR) system, such as 1-best hypotheses, as input features to a large language model (LLM). Finally, we explore a multimodal system that combines acoustic and lexical features, as well as ASR decoder signals, in an LLM. Using multimodal information yields relative equal-error-rate (EER) improvements over text-only and audio-only models of up to 39% and 61%, respectively. Increasing the size of the LLM and training with low-rank adaptation leads to further relative EER reductions of up to 18% on our dataset.

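The multimodal setup described in the abstract can be pictured with a small sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes a GPT-2 backbone via Hugging Face transformers, LoRA adapters via peft, and simple mean pooling of the final hidden states; all dimensions, module names, and the aggregation choice are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model


class MultimodalDirectednessClassifier(nn.Module):
    """Sketch: feed an audio embedding, ASR decoder signals, and the ASR
    1-best hypothesis to a decoder-only LLM as one input sequence."""

    def __init__(self, llm_name="gpt2", audio_dim=512, signal_dim=4):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(llm_name)
        self.tokenizer.pad_token = self.tokenizer.eos_token  # GPT-2 has no pad token
        base = AutoModelForCausalLM.from_pretrained(llm_name)
        # Low-rank adaptation (LoRA): train small adapters instead of the full LLM.
        lora_cfg = LoraConfig(
            r=8, lora_alpha=16, target_modules=["c_attn"],
            fan_in_fan_out=True, task_type="CAUSAL_LM",
        )
        self.llm = get_peft_model(base, lora_cfg)
        hidden = base.config.hidden_size
        # Project acoustic and ASR decoder features into the LLM embedding space
        # so they can be prepended to the text-token embeddings.
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.signal_proj = nn.Linear(signal_dim, hidden)
        self.head = nn.Linear(hidden, 1)  # device-directed vs. non-directed logit

    def forward(self, audio_emb, decoder_signals, hypotheses):
        tok = self.tokenizer(hypotheses, return_tensors="pt", padding=True)
        text_emb = self.llm.get_input_embeddings()(tok.input_ids)
        prefix = torch.stack(
            [self.audio_proj(audio_emb), self.signal_proj(decoder_signals)], dim=1
        )
        inputs = torch.cat([prefix, text_emb], dim=1)
        out = self.llm(inputs_embeds=inputs, output_hidden_states=True)
        # Mean-pool the final hidden states as a simple utterance summary
        # (the paper's exact aggregation may differ).
        summary = out.hidden_states[-1].mean(dim=1)
        return self.head(summary).squeeze(-1)


# Example usage with random stand-in features (batch of two utterances).
model = MultimodalDirectednessClassifier()
logits = model(
    torch.randn(2, 512),  # pooled acoustic embeddings (hypothetical extractor)
    torch.randn(2, 4),    # ASR decoder signals, e.g. confidence-like scores
    ["set a timer for ten minutes", "no I told him yesterday"],
)
print(torch.sigmoid(logits))  # probability that each utterance is device-directed
```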
Metadata
Author: Dominik Wagner, Alexander Churchill, Siddharth Sigtia, Panayiotis Georgiou, Matt Mirsamadi, Aarshee Mishra, Erik Marchi
DOI: https://doi.org/10.1109/ICASSP48485.2024.10446224
Publisher: IEEE
Document Type: Conference proceeding (article)
Language: English
Reviewed: Peer-reviewed
Release Date: 2024/10/10
Tags: device-directed speech detection, large language model, multimodal, conditional generation
Number of Pages: 5
First Page: 10451
Last Page: 10455
Conference: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Institutes: Zentrum für Künstliche Intelligenz (KIZ)
Research Themes: Digitalization & Artificial Intelligence