TY - CHAP
A1 - Wende, Florian
A1 - Laubender, Guido
A1 - Steinke, Thomas
T1 - Integration of Intel Xeon Phi Servers into the HLRN-III Complex: Experiences, Performance and Lessons Learned
T2 - CUG2014 Proceedings
Y1 - 2014
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Steinke, Thomas
T1 - Swendsen-Wang Multi-Cluster Algorithm for the 2D/3D Ising Model on Xeon Phi and GPU
T2 - SC '13: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Article No. 83, ACM, New York, NY, USA, 2013
Y1 - 2013
U6 - https://doi.org/10.1145/2503210.2503254
ER -

TY - CHAP
A1 - Noack, Matthias
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Cordes, Frank
T1 - A Unified Programming Model for Intra- and Inter-Node Offloading on Xeon Phi Clusters
T2 - SC '14: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis. SC14, November 16-21, 2014, New Orleans, Louisiana, USA
N2 - Standard offload programming models for the Xeon Phi, e.g., Intel LEO and OpenMP 4.0, are restricted to a single compute node and hence a limited number of coprocessors. Scaling applications across a Xeon Phi cluster/supercomputer thus requires hybrid programming approaches, usually MPI+X. In this work, we present a framework based on heterogeneous active messages (HAM-Offload) that provides the means to offload work to local and remote (co)processors using a unified offload API. Since HAM-Offload provides primitives similar to those of current local offload frameworks, existing applications can easily be ported to overcome the single-node limitation while keeping the convenient offload programming model. We demonstrate the effectiveness of the framework by using it to enable a real-world application from the field of molecular dynamics to use multiple local and remote Xeon Phis. The evaluation shows good scaling behavior. Compared with LEO, performance is equal for large offloads and significantly better for small offloads.
Y1 - 2014
UR - http://dl.acm.org/citation.cfm?id=2683616
U6 - https://doi.org/10.1109/SC.2014.22
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Klemm, Michael
A1 - Reinefeld, Alexander
ED - Reinders, James
ED - Jeffers, Jim
T1 - Concurrent Kernel Offloading
T2 - High Performance Parallelism Pearls
Y1 - 2014
UR - http://store.elsevier.com/High-Performance-Parallelism-Pearls/James-Reinders/isbn-9780128021187/
SN - 978-0128021187
N1 - Target publication date: Nov 2014
PB - Morgan Kaufmann, Elsevier
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Reinefeld, Alexander
T1 - The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing
N2 - With the growing number of hardware components and the increasing software complexity in upcoming exascale computers, system failures will become the norm rather than the exception for long-running applications. Fault tolerance can be achieved by creating checkpoints during the execution of a parallel program. Checkpoint/Restart (C/R) mechanisms allow for both task migration (even in the absence of hardware faults) and restarting of tasks after hardware faults occur. Affected tasks are then migrated to other nodes, which may result in unfortunate process placement and/or oversubscription of compute resources. In this paper, we analyze the impact of unfortunate process placement and oversubscription of compute resources on the performance and scalability of two typical HPC application workloads, CP2K and MOM5. Results are given for a Cray XC30/40 with Aries dragonfly topology. Our results indicate that unfortunate process placement has only a minor negative impact, while oversubscription substantially degrades performance. The latter may only be (partially) beneficial when multiple applications with different computational characteristics are placed on the same node.
T3 - ZIB-Report - 15-05
KW - Fault-tolerance
KW - Process placement
KW - Oversubscription
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-53560
SN - 1438-0064
ER -

TY - GEN
A1 - Wende, Florian
A1 - Laubender, Guido
A1 - Steinke, Thomas
T1 - Integration of Intel Xeon Phi Servers into the HLRN-III Complex: Experiences, Performance and Lessons Learned
N2 - The third generation of the North German Supercomputing Alliance (HLRN) compute and storage facilities comprises a Cray XC30 architecture with exclusively Intel Ivy Bridge compute nodes. In the second phase, scheduled for November 2014, the HLRN-III configuration will undergo a substantial upgrade, together with the option of integrating accelerator nodes into the system. To support the decision-making process, a four-node Intel Xeon Phi cluster has been integrated into the present HLRN-III infrastructure at ZIB. This integration includes user/project management, file system access, and job management via the HLRN-III batch system. For selected workloads, in-depth analysis, migration, and optimization work on the Xeon Phi is in progress. We report our experiences and lessons learned from the Xeon Phi installation and integration process. For selected examples, initial results of the application evaluation on the Xeon Phi cluster platform are discussed.
T3 - ZIB-Report - 14-15
KW - Performance and usage measurement
KW - System management
KW - System integration
KW - Xeon Phi cluster
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-49990
UR - https://cug.org/proceedings/cug2014_proceedings/includes/files/pap194-file2.pdf
SN - 1438-0064
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Cordes, Frank
T1 - Multi-threaded Kernel Offloading to GPGPU Using Hyper-Q on Kepler Architecture
N2 - Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced the concurrent kernel execution feature, allowing up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of the respective resources. Insufficient scheduling capabilities in this respect, however, can significantly reduce the theoretical concurrency level. With the Kepler GPU architecture, Nvidia addresses this issue by introducing the Hyper-Q feature with 32 hardware-managed work queues for concurrent kernel execution. We investigate the Hyper-Q feature within heterogeneous workloads in which multiple concurrent host threads or processes each offload computations to the GPU. By means of a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
T3 - ZIB-Report - 14-19
KW - GPGPU
KW - Hyper-Q
KW - Concurrent Kernel Execution
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-50362
SN - 1438-0064
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Reinefeld, Alexander
ED - Gray, A.
ED - Smith, L.
ED - Weiland, M.
T1 - The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing
T2 - Proceedings of the 3rd International Conference on Exascale Applications and Software, EASC 2015
Y1 - 2015
SN - 978-0-9926615-1-9
SP - 13
EP - 18
PB - The University of Edinburgh
ER -

TY - CHAP
A1 - Noack, Matthias
A1 - Wende, Florian
A1 - Oertel, Klaus-Dieter
ED - Reinders, James
ED - Jeffers, Jim
T1 - OpenCL: There and Back Again
T2 - High Performance Parallelism Pearls
Y1 - 2015
SN - 978-0-12-803819-2
VL - 2
SP - 355
EP - 378
PB - Morgan Kaufmann, Elsevier
ER -

TY - JOUR
A1 - Alhaddad, Samer
A1 - Förstner, Jens
A1 - Groth, Stefan
A1 - Grünewald, Daniel
A1 - Grynko, Yevgen
A1 - Hannig, Frank
A1 - Kenter, Tobias
A1 - Pfreundt, F.J.
A1 - Plessl, Christian
A1 - Schotte, Merlind
A1 - Steinke, Thomas
A1 - Teich, J.
A1 - Weiser, Martin
A1 - Wende, Florian
T1 - The HighPerMeshes Framework for Numerical Algorithms on Unstructured Grids
JF - Concurrency and Computation: Practice and Experience
N2 - Solving PDEs on unstructured grids is a cornerstone of engineering and scientific computing. Heterogeneous parallel platforms, including CPUs, GPUs, and FPGAs, enable energy-efficient execution of computationally demanding simulations. In this article, we introduce the HPM C++-embedded DSL, which bridges the abstraction gap between the mathematical formulation of mesh-based algorithms for PDE problems on the one hand and the increasing number of heterogeneous platforms with their different programming models on the other. The HPM DSL thus aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HPM DSL and demonstrate its usage with three examples. The mapping of the abstract algorithmic description onto parallel hardware, including distributed-memory compute clusters, is presented. A code generator and a matching back end allow the acceleration of HPM code with GPUs. Finally, the achievable performance and scalability are demonstrated for different example problems.
Y1 - 2022
U6 - https://doi.org/10.1002/cpe.6616
VL - 34
IS - 14
ER -