Multi-threaded Kernel Offloading to GPGPU Using Hyper-Q on Kepler Architecture

Please always quote using this URN: urn:nbn:de:0297-zib-50362
Abstract: Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced concurrent kernel execution, which allows up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of its resources. Insufficient scheduling capabilities, however, can reduce the achievable concurrency significantly below this theoretical level. With the Kepler GPU architecture, Nvidia addresses this issue through the Hyper-Q feature, which provides 32 hardware-managed work queues for concurrent kernel execution. We investigate Hyper-Q for heterogeneous workloads in which multiple concurrent host threads or processes each offload computations to the GPU. Using a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
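
The offloading pattern studied in the report can be illustrated with a minimal sketch (not taken from the report itself): several host threads each create their own CUDA stream and launch a small kernel into it; on Kepler, Hyper-Q's hardware work queues allow those kernels to execute concurrently on the shared device. All names, sizes, and the thread count below are illustrative assumptions.

    // Minimal sketch of multi-threaded kernel offloading; illustrative only.
    #include <cstdio>
    #include <thread>
    #include <vector>
    #include <cuda_runtime.h>

    __global__ void smallKernel(float *data, int n)
    {
        // Deliberately small workload: cannot saturate the GPU on its own.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] = data[i] * 2.0f + 1.0f;
    }

    void hostThread(int id, int n)
    {
        cudaStream_t stream;
        cudaStreamCreate(&stream);          // one stream per host thread

        float *d_data;
        cudaMalloc(&d_data, n * sizeof(float));
        cudaMemsetAsync(d_data, 0, n * sizeof(float), stream);

        // Kernels launched into different streams may overlap on the device;
        // with Hyper-Q each stream can map to its own hardware work queue.
        smallKernel<<<(n + 255) / 256, 256, 0, stream>>>(d_data, n);

        cudaStreamSynchronize(stream);
        cudaFree(d_data);
        cudaStreamDestroy(stream);
        printf("host thread %d done\n", id);
    }

    int main()
    {
        const int nThreads = 8;             // illustrative concurrency level
        std::vector<std::thread> threads;
        for (int t = 0; t < nThreads; ++t)
            threads.emplace_back(hostThread, t, 1 << 14);
        for (auto &t : threads)
            t.join();
        return 0;
    }

Compiled with, e.g., nvcc -std=c++11, the kernels launched from the separate streams are eligible to run concurrently, whereas funneling all launches through the single legacy default stream would serialize them.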

Metadata
Author: Florian Wende, Thomas Steinke, Frank Cordes
Document Type: ZIB-Report
Tag: Concurrent Kernel Execution; GPGPU; Hyper-Q
MSC-Classification: 00-XX GENERAL
CCS-Classification: B. Hardware
PACS-Classification: 80.00.00 INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY
Date of first Publication: 2014/02/06
Series (Serial Number): ZIB-Report (14-19)
ISSN: 1438-0064
Licence: Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)