Optimized Speculative Sampling for GPU Hardware Accelerators

In this work, we optimize speculative sampling for parallel hardware accelerators to improve sampling speed. We notice that substantial portions of the intermediate matrices necessary for speculative sampling can be computed concurrently. This allows us to distribute the workload across multiple GPU threads, enabling simultaneous operations on matrix segments within thread blocks. This results in profiling time improvements ranging from 6% to 13% relative to the baseline implementation, without compromising accuracy. To further accelerate speculative sampling, probability distributions parameterized by softmax are approximated by sigmoid. This approximation approach results in significantly greater relative improvements in profiling time, ranging from 37% to 94%, with a minor decline in accuracy. We conduct extensive experiments on both automatic speech recognition and summarization tasks to validate the effectiveness of our optimization methods.
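For context, the step both optimizations target can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the standard speculative-sampling acceptance rule keeps a drafted token with probability min(1, p_target/p_draft), and swapping the softmax for an element-wise sigmoid, as the abstract describes, removes the max/sum reductions over the vocabulary. How exactly the paper feeds the unnormalized sigmoid scores into this ratio is an assumption here, and all function names below are hypothetical.

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax; requires two reductions
        # (max and sum) over the whole vocabulary.
        z = logits - logits.max()
        e = np.exp(z)
        return e / e.sum()

    def sigmoid(logits):
        # Element-wise sigmoid; no cross-vocabulary reductions, so each
        # entry can be computed independently by a GPU thread. Note the
        # result is unnormalized, which is the source of the accuracy
        # trade-off the abstract mentions.
        return 1.0 / (1.0 + np.exp(-logits))

    def accept_draft_token(target_logits, draft_logits, token, rng,
                           use_sigmoid=False):
        # Standard speculative-sampling acceptance test: keep the drafted
        # token with probability min(1, p_target(token) / p_draft(token)).
        score = sigmoid if use_sigmoid else softmax
        p = score(target_logits)[token]
        q = score(draft_logits)[token]
        return rng.random() < min(1.0, p / q)

    # Toy usage with random logits over a 32k-token vocabulary.
    rng = np.random.default_rng(0)
    target_logits = rng.normal(size=32000)
    draft_logits = rng.normal(size=32000)
    print(accept_draft_token(target_logits, draft_logits, token=7,
                             rng=rng, use_sigmoid=True))

The point of the sketch is the cost asymmetry: softmax couples every vocabulary entry through its normalizer, while the sigmoid variant is embarrassingly parallel, which is consistent with the per-thread-block decomposition described in the abstract.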

Metadata
Author: Dominik Wagner, Seanie Lee, Ilja Baumann, Philipp Seeberger, Korbinian Riedhammer, Tobias Bocklet
DOI: https://doi.org/10.18653/v1/2024.emnlp-main.370
Parent Title (English): Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics
Place of publication: Miami, Florida, USA
Document Type: conference proceeding (article)
Language: English
Date of first Publication: 2024/11/30
Reviewed: Peer-reviewed
Release Date: 2025/08/04
Pages: 6442-6458
Conference: EMNLP 2024
Institutes: Fakultät Informatik; Zentrum für Künstliche Intelligenz (KIZ)
Research Themes: Digitalization & Artificial Intelligence