
Multi-threaded Kernel Offloading to GPGPU Using Hyper-Q on Kepler Architecture

Please always cite this URN: urn:nbn:de:0297-zib-50362
Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced the concurrent kernel execution feature, which allows up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of its resources. Insufficient scheduling capabilities in this respect, however, can significantly reduce the theoretical concurrency level. With the Kepler GPU architecture, Nvidia addresses this issue by introducing the Hyper-Q feature with 32 hardware-managed work queues for concurrent kernel execution. We investigate the Hyper-Q feature within heterogeneous workloads in which multiple concurrent host threads or processes each offload computations to the GPU. By means of a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
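
To illustrate the scenario the abstract describes, here is a minimal sketch (not taken from the report; the kernel, problem size, and thread count are illustrative assumptions) of multiple OpenMP host threads each launching a small CUDA kernel into its own stream, so that Kepler's Hyper-Q can schedule the kernels concurrently:

#include <cstdio>
#include <cuda_runtime.h>

// Small, artificial workload: one kernel alone cannot saturate the GPU,
// which is exactly the situation where concurrent kernel execution helps.
__global__ void busy_kernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = data[i];
        for (int k = 0; k < 4096; ++k)
            x = x * 1.000001f + 0.5f;
        data[i] = x;
    }
}

int main()
{
    const int n = 1 << 14;   // illustrative small problem size per host thread

    // Eight concurrent host threads, each with its own stream and buffer.
    #pragma omp parallel num_threads(8)
    {
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        float *d_data;
        cudaMalloc(&d_data, n * sizeof(float));

        // Each host thread offloads its own kernel into its own stream.
        // On Kepler, Hyper-Q maps independent streams onto up to 32
        // hardware work queues, so these kernels may run concurrently.
        busy_kernel<<<(n + 255) / 256, 256, 0, stream>>>(d_data, n);
        cudaStreamSynchronize(stream);

        cudaFree(d_data);
        cudaStreamDestroy(stream);
    }
    printf("all host threads finished\n");
    return 0;
}

Compile with something like nvcc -arch=sm_35 -Xcompiler -fopenmp. On Fermi, the single hardware work queue can introduce false dependencies between kernels launched from different streams, which is why a kernel reordering mechanism, as the one the report compares against, was needed there.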

Download full-text files

Export metadata

Metadata
Author details: Florian Wende (ORCiD), Thomas Steinke, Frank Cordes
Document type: ZIB-Report
Keywords / tags: Concurrent Kernel Execution; GPGPU; Hyper-Q
MSC classification: 00-XX GENERAL
CCS classification: B. Hardware
PACS classification: 80.00.00 INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY
Date of first publication: 06.02.2014
Series (volume number): ZIB-Report (14-19)
ISSN: 1438-0064
License: Creative Commons - Attribution-NonCommercial-ShareAlike