Outlier Reduction with Gated Attention for Improved Post-training Quantization in Large Sequence-to-sequence Speech Foundation Models

  • This paper explores improving post-training quantization (PTQ) after knowledge distillation in the Whisper speech foundation model family. We address the challenge of outliers in weight and activation tensors, which are known to impede quantization quality in transformer-based language and vision models. Extending this observation to Whisper, we demonstrate that these outliers are also present when transformer-based models are trained to perform automatic speech recognition, necessitating mitigation strategies for PTQ. We show that outliers can be reduced by a recently proposed gating mechanism in the attention blocks of the student model, enabling effective 8-bit quantization and lower word error rates compared to student models without the gating mechanism in place.
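The abstract does not spell out the exact form of the attention gate; below is a minimal sketch of one plausible variant, assuming an elementwise sigmoid gate applied to the self-attention output of each block. The class and parameter names (GatedMultiheadAttention, gate_proj, the Whisper-tiny-like dimensions) are illustrative assumptions, not the implementation from the paper.

```python
import torch
import torch.nn as nn


class GatedMultiheadAttention(nn.Module):
    """Self-attention followed by an elementwise sigmoid gate (illustrative sketch).

    The gate scales each element of the attention output by a value in [0, 1]
    computed from the block input, which can dampen large activations (outliers)
    before they reach later layers and the quantizer.
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate_proj = nn.Linear(d_model, d_model)  # per-element gate logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x, need_weights=False)  # standard self-attention
        gate = torch.sigmoid(self.gate_proj(x))               # elementwise gate in [0, 1]
        return gate * attn_out


if __name__ == "__main__":
    layer = GatedMultiheadAttention(d_model=384, n_heads=6)  # Whisper-tiny-sized block (assumption)
    frames = torch.randn(2, 100, 384)                        # (batch, time, features)
    print(layer(frames).shape)                               # torch.Size([2, 100, 384])
```

After such a gated student is obtained via distillation, standard 8-bit PTQ can be applied as usual; the abstract's claim is that the reduced outlier magnitudes make this quantization step less damaging to word error rate.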

Metadata
Author: Dominik Wagner, Ilja Baumann, Korbinian Riedhammer, Tobias Bocklet
DOI: https://doi.org/10.21437/Interspeech.2024-2105
Editor: Krisztina Pesak
Document Type: Conference proceeding (article)
Language: English
Date of first Publication: 2024/06/30
Reviewed: Peer-reviewed
Release Date: 2024/10/17
Tags: post-training quantization, Whisper, gated attention, outliers
Page count: 5
First Page: 4623
Last Page: 4627
Conference: Interspeech 2024, 1-5 September, Kos, Greece
Institutes: Zentrum für Künstliche Intelligenz (KIZ)
Research Themes: Digitalization & Artificial Intelligence