TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Klemm, Michael
A1 - Reinefeld, Alexander
ED - Reinders, James
ED - Jeffers, Jim
T1 - Concurrent Kernel Offloading
T2 - High Performance Parallelism Pearls
Y1 - 2014
UR - http://store.elsevier.com/High-Performance-Parallelism-Pearls/James-Reinders/isbn-9780128021187/
SN - 978-0128021187
N1 - Target publication date: Nov, 2014
PB - Morgan Kaufmann, Elsevier
ER -
TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Reinefeld, Alexander
T1 - The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing
N2 - With the growing number of hardware components and the increasing software complexity of upcoming exascale computers, system failures will become the norm rather than the exception for long-running applications. Fault tolerance can be achieved by creating checkpoints during the execution of a parallel program. Checkpoint/Restart (C/R) mechanisms allow both task migration (even in the absence of hardware faults) and the restarting of tasks after hardware faults occur. Affected tasks are then migrated to other nodes, which may result in unfortunate process placement and/or oversubscription of compute resources. In this paper we analyze the impact of unfortunate process placement and oversubscription of compute resources on the performance and scalability of two typical HPC application workloads, CP2K and MOM5. Results are given for a Cray XC30/40 with Aries dragonfly topology. Our results indicate that unfortunate process placement has only a minor negative impact, while oversubscription substantially degrades performance. The latter might only be (partially) beneficial when placing multiple applications with different computational characteristics on the same node.
T3 - ZIB-Report - 15-05
KW - Fault-tolerance
KW - Process placement
KW - Oversubscription
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-53560
SN - 1438-0064
ER -
TY - GEN
A1 - Wende, Florian
A1 - Laubender, Guido
A1 - Steinke, Thomas
T1 - Integration of Intel Xeon Phi Servers into the HLRN-III Complex: Experiences, Performance and Lessons Learned
N2 - The third generation of the North German Supercomputing Alliance (HLRN) compute and storage facilities comprises a Cray XC30 architecture with exclusively Intel Ivy Bridge compute nodes. In the second phase, scheduled for November 2014, the HLRN-III configuration will undergo a substantial upgrade, together with the option of integrating accelerator nodes into the system. To support the decision-making process, a four-node Intel Xeon Phi cluster has been integrated into the present HLRN-III infrastructure at ZIB. This integration includes user/project management, file system access, and job management via the HLRN-III batch system. For selected workloads, in-depth analysis, migration, and optimization work on Xeon Phi is in progress. We report our experiences and lessons learned from the Xeon Phi installation and integration process. For selected examples, initial results of the application evaluation on the Xeon Phi cluster platform are discussed.
T3 - ZIB-Report - 14-15
KW - Performance and usage measurement
KW - System management
KW - System integration
KW - Xeon Phi cluster
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-49990
UR - https://cug.org/proceedings/cug2014_proceedings/includes/files/pap194-file2.pdf
SN - 1438-0064
ER -
TY - CHAP
A1 - Dreßler, Sebastian
A1 - Steinke, Thomas
T1 - An Automated Approach for Estimating the Memory Footprint of Non-linear Data Objects
T2 - Euro-Par 2013: Parallel Processing Workshops
Y1 - 2014
U6 - https://doi.org/10.1007/978-3-642-54420-0_25
VL - 8374
SP - 249
EP - 258
ER -
TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Cordes, Frank
T1 - Multi-threaded Kernel Offloading to GPGPU Using Hyper-Q on Kepler Architecture
N2 - Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced the concurrent kernel execution feature, allowing up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of the respective resources. Insufficient scheduling capabilities in this respect, however, can significantly reduce the theoretical concurrency level. With the Kepler GPU architecture, Nvidia addresses this issue by introducing the Hyper-Q feature with 32 hardware-managed work queues for concurrent kernel execution. We investigate the Hyper-Q feature within heterogeneous workloads with multiple concurrent host threads or processes, each offloading computations to the GPU. By means of a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
T3 - ZIB-Report - 14-19
KW - GPGPU
KW - Hyper-Q
KW - Concurrent Kernel Execution
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-50362
SN - 1438-0064
ER -
TY - JOUR
A1 - Krüger, Jens
A1 - Grunzke, Richard
A1 - Gesing, Sandra
A1 - Breuers, Sebastian
A1 - Brinkmann, André
A1 - de la Garza, Luis
A1 - Kohlbacher, Oliver
A1 - Kruse, Martin
A1 - Nagel, Wolfgang
A1 - Packschies, Lars
A1 - Müller-Pfefferkorn, Ralph
A1 - Schäfer, Patrick
A1 - Schärfe, Charlotta
A1 - Steinke, Thomas
A1 - Schlemmer, Tobias
A1 - Warzecha, Klaus Dieter
A1 - Zink, Andreas
A1 - Herres-Pawlis, Sonja
T1 - The MoSGrid Science Gateway - A Complete Solution for Molecular Simulations
JF - Journal of Chemical Theory and Computation
Y1 - 2014
U6 - https://doi.org/10.1021/ct500159h
VL - 10
IS - 6
SP - 2232
EP - 2245
ER -
TY - CHAP
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Reinefeld, Alexander
ED - Gray, A.
ED - Smith, L.
ED - Weiland, M.
T1 - The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing
T2 - Proceedings of the 3rd International Conference on Exascale Applications and Software, EASC 2015
Y1 - 2015
SN - 978-0-9926615-1-9
SP - 13
EP - 18
PB - The University of Edinburgh
ER -
TY - JOUR
A1 - Grunzke, Richard
A1 - Breuers, Sebastian
A1 - Gesing, Sandra
A1 - Herres-Pawlis, Sonja
A1 - Kruse, Martin
A1 - Blunk, Dirk
A1 - de la Garza, Luis
A1 - Packschies, Lars
A1 - Schäfer, Patrick
A1 - Schärfe, Charlotta
A1 - Schlemmer, Tobias
A1 - Steinke, Thomas
A1 - Schuller, Bernd
A1 - Müller-Pfefferkorn, Ralph
A1 - Jäkel, René
A1 - Nagel, Wolfgang
A1 - Atkinson, Malcolm
A1 - Krüger, Jens
T1 - Standards-based metadata management for molecular simulations
JF - Concurrency and Computation: Practice and Experience
Y1 - 2013
U6 - https://doi.org/10.1002/cpe.3116
ER -
TY - CHAP
A1 - Weinhold, Carsten
A1 - Lackorzynski, Adam
A1 - Bierbaum, Jan
A1 - Küttler, Martin
A1 - Planeta, Maksym
A1 - Härtig, Hermann
A1 - Shiloh, Amnon
A1 - Levy, Ely
A1 - Ben-Nun, Tal
A1 - Barak, Amnon
A1 - Steinke, Thomas
A1 - Schütt, Thorsten
A1 - Fajerski, Jan
A1 - Reinefeld, Alexander
A1 - Lieber, Matthias
A1 - Nagel, Wolfgang
T1 - FFMK: A Fast and Fault-tolerant Microkernel-based System for Exascale Computing
T2 - SPPEXA Symposium 2016
Y1 - 2016
U6 - https://doi.org/10.1007/978-3-319-40528-5_18
ER -
TY - CHAP
A1 - Fajerski, J.
A1 - Noack, Matthias
A1 - Reinefeld, Alexander
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Steinke, Thomas
T1 - Fast In-Memory Checkpointing with POSIX API for Legacy Exascale-Applications
T2 - SPPEXA Symposium 2016
Y1 - 2016
U6 - https://doi.org/10.1007/978-3-319-40528-5_19
ER -