TY - CHAP
A1 - Wende, Florian
A1 - Steinke, Thomas
T1 - Swendsen-Wang Multi-Cluster Algorithm for the 2D/3D Ising Model on Xeon Phi and GPU
T2 - Proceedings of SC13: International Conference for High Performance Computing, Networking, Storage and Analysis, Article No. 83, ACM, New York, NY, USA, 2013
Y1 - 2013
U6 - https://doi.org/10.1145/2503210.2503254
ER -
TY - GEN
A1 - Benkrid, Khaled
A1 - El-Araby, Esam
A1 - Huang, Miaoqing
A1 - Sano, Kentaro
A1 - Steinke, Thomas
T1 - High-Performance Reconfigurable Computing - Editorial for the Special Issue of the International Journal of Reconfigurable Computing
T2 - International Journal of Reconfigurable Computing
Y1 - 2012
U6 - https://doi.org/10.1155/2012/104963
VL - 2012
SP - 1
EP - 2
ER -
TY - CHAP
A1 - Noack, Matthias
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Cordes, Frank
T1 - A Unified Programming Model for Intra- and Inter-Node Offloading on Xeon Phi Clusters
T2 - SC '14: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, SC14, November 16-21, 2014, New Orleans, Louisiana, USA
N2 - Standard offload programming models for the Xeon Phi, e.g. Intel LEO and OpenMP 4.0, are restricted to a single compute node and hence to a limited number of coprocessors. Scaling applications across a Xeon Phi cluster/supercomputer thus requires hybrid programming approaches, usually MPI+X. In this work, we present a framework based on heterogeneous active messages (HAM-Offload) that provides the means to offload work to local and remote (co)processors using a unified offload API. Since HAM-Offload provides primitives similar to those of current local offload frameworks, existing applications can be easily ported to overcome the single-node limitation while keeping the convenient offload programming model. We demonstrate the effectiveness of the framework by using it to enable a real-world application from the field of molecular dynamics to use multiple local and remote Xeon Phis. The evaluation shows good scaling behavior. Compared with LEO, performance is equal for large offloads and significantly better for small offloads.
Y1 - 2014
UR - http://dl.acm.org/citation.cfm?id=2683616
U6 - https://doi.org/10.1109/SC.2014.22
ER -
TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Klemm, Michael
A1 - Reinefeld, Alexander
ED - Reinders, James
ED - Jeffers, Jim
T1 - Concurrent Kernel Offloading
T2 - High Performance Parallelism Pearls
Y1 - 2014
UR - http://store.elsevier.com/High-Performance-Parallelism-Pearls/James-Reinders/isbn-9780128021187/
SN - 978-0128021187
N1 - Target publication date: Nov, 2014
PB - Morgan Kaufmann, Elsevier
ER -
TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Reinefeld, Alexander
T1 - The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing
N2 - With the growing number of hardware components and the increasing software complexity in upcoming exascale computers, system failures will become the norm rather than the exception for long-running applications. Fault tolerance can be achieved by creating checkpoints during the execution of a parallel program. Checkpoint/Restart (C/R) mechanisms allow for both task migration (even if there were no hardware faults) and restarting of tasks after the occurrence of hardware faults. Affected tasks are then migrated to other nodes, which may result in unfortunate process placement and/or oversubscription of compute resources.
In this paper we analyze the impact of unfortunate process placement and oversubscription of compute resources on the performance and scalability of two typical HPC application workloads, CP2K and MOM5. Results are given for a Cray XC30/40 with Aries dragonfly topology. Our results indicate that unfortunate process placement has only a minor negative impact, while oversubscription substantially degrades performance. The latter may be (partially) beneficial only when placing multiple applications with different computational characteristics on the same node.
T3 - ZIB-Report - 15-05
KW - Fault-tolerance
KW - Process placement
KW - Oversubscription
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-53560
SN - 1438-0064
ER -
TY - JOUR
A1 - Heinze, Rieke
A1 - Dipankar, Anurag
A1 - Henken, Cintia Carbajal
A1 - Moseley, Christopher
A1 - Sourdeval, Odran
A1 - Trömel, Silke
A1 - Xie, Xinxin
A1 - Adamidis, Panos
A1 - Ament, Felix
A1 - Baars, Holger
A1 - Barthlott, Christian
A1 - Behrendt, Andreas
A1 - Blahak, Ulrich
A1 - Bley, Sebastian
A1 - Brdar, Slavko
A1 - Brueck, Matthias
A1 - Crewell, Susanne
A1 - Deneke, Hartwig
A1 - Di Girolamo, Paolo
A1 - Evaristo, Raquel
A1 - Fischer, Jürgen
A1 - Frank, Christopher
A1 - Friederichs, Petra
A1 - Göcke, Tobias
A1 - Gorges, Ksenia
A1 - Hande, Luke
A1 - Hanke, Moritz
A1 - Hansen, Akio
A1 - Hege, Hans-Christian
A1 - Hose, Corinna
A1 - Jahns, Thomas
A1 - Kalthoff, Norbert
A1 - Klocke, Daniel
A1 - Kneifel, Stefan
A1 - Knippertz, Peter
A1 - Kuhn, Alexander
A1 - van Laar, Thriza
A1 - Macke, Andreas
A1 - Maurer, Vera
A1 - Mayer, Bernhard
A1 - Meyer, Catrin I.
A1 - Muppa, Shravan K.
A1 - Neggers, Roeland A. J.
A1 - Orlandi, Emiliano
A1 - Pantillon, Florian
A1 - Pospichal, Bernhard
A1 - Röber, Niklas
A1 - Scheck, Leonhard
A1 - Seifert, Axel
A1 - Seifert, Patric
A1 - Senf, Fabian
A1 - Siligam, Pavan
A1 - Simmer, Clemens
A1 - Steinke, Sandra
A1 - Stevens, Bjorn
A1 - Wapler, Kathrin
A1 - Weniger, Michael
A1 - Wulfmeyer, Volker
A1 - Zängl, Günther
A1 - Zhang, Dan
A1 - Quaas, Johannes
T1 - Large-eddy simulations over Germany using ICON: a comprehensive evaluation
JF - Quarterly Journal of the Royal Meteorological Society
N2 - Large-eddy simulations (LES) with the new ICOsahedral Non-hydrostatic atmosphere model (ICON) covering Germany are evaluated for four days in spring 2013 using observational data from various sources. Simulations with the established Consortium for Small-scale Modelling (COSMO) numerical weather prediction model and further standard LES codes are performed and used as a reference. This comprehensive evaluation approach covers multiple parameters and scales, focusing on boundary-layer variables, clouds and precipitation. The evaluation points to the need to work on parametrizations influencing the surface energy balance, and possibly on ice cloud microphysics. The central purpose for the development and application of ICON in the LES configuration is the use of simulation results to improve the understanding of moist processes, as well as their parametrization in climate models. The evaluation thus aims at building confidence in the model's ability to simulate small- to mesoscale variability in turbulence, clouds and precipitation. The results are encouraging: the high-resolution model matches the observed variability much better at small- to mesoscales than the coarser resolved reference model.
At its highest grid resolution, the simulated turbulence profiles are realistic and column water vapour matches the observed temporal variability at short time-scales. Despite being somewhat too large and too frequent, small cumulus clouds are well represented in comparison with satellite data, as is the shape of the cloud size spectrum. Variability of cloud water matches the satellite observations much better in ICON than in the reference model. In this sense, it is concluded that the model is fit for the purpose of using its output for parametrization development, despite the potential to further improve some important aspects of processes that are also parametrized in the high-resolution model.
Y1 - 2017
U6 - https://doi.org/10.1002/qj.2947
VL - 143
IS - 702
SP - 69
EP - 100
ER -
TY - GEN
A1 - Wende, Florian
A1 - Laubender, Guido
A1 - Steinke, Thomas
T1 - Integration of Intel Xeon Phi Servers into the HLRN-III Complex: Experiences, Performance and Lessons Learned
N2 - The third generation of the North German Supercomputing Alliance (HLRN) compute and storage facilities comprises a Cray XC30 architecture with exclusively Intel Ivy Bridge compute nodes. In the second phase, scheduled for November 2014, the HLRN-III configuration will undergo a substantial upgrade together with the option of integrating accelerator nodes into the system. To support the decision-making process, a four-node Intel Xeon Phi cluster is integrated into the present HLRN-III infrastructure at ZIB. This integration includes user/project management, file system access and job management via the HLRN-III batch system. For selected workloads, in-depth analysis, migration and optimization work on Xeon Phi is in progress. We will report our experiences and lessons learned within the Xeon Phi installation and integration process. For selected examples, initial results of the application evaluation on the Xeon Phi cluster platform will be discussed.
T3 - ZIB-Report - 14-15
KW - Performance and usage measurement
KW - System management
KW - System integration
KW - Xeon Phi cluster
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-49990
UR - https://cug.org/proceedings/cug2014_proceedings/includes/files/pap194-file2.pdf
SN - 1438-0064
ER -
TY - CHAP
A1 - Dreßler, Sebastian
A1 - Steinke, Thomas
T1 - An Automated Approach for Estimating the Memory Footprint of Non-linear Data Objects
T2 - Euro-Par 2013: Parallel Processing Workshops
Y1 - 2014
U6 - https://doi.org/10.1007/978-3-642-54420-0_25
VL - 8374
SP - 249
EP - 258
ER -
TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Cordes, Frank
T1 - Multi-threaded Kernel Offloading to GPGPU Using Hyper-Q on Kepler Architecture
N2 - Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced the concurrent kernel execution feature, allowing up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of the respective resources. Insufficient scheduling capabilities in this respect, however, can significantly reduce the theoretical concurrency level. With the Kepler GPU architecture, Nvidia addresses this issue by introducing the Hyper-Q feature with 32 hardware-managed work queues for concurrent kernel execution. We investigate the Hyper-Q feature within heterogeneous workloads with multiple concurrent host threads or processes, each offloading computations to the GPU.
By means of a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
T3 - ZIB-Report - 14-19
KW - GPGPU
KW - Hyper-Q
KW - Concurrent Kernel Execution
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-50362
SN - 1438-0064
ER -
TY - JOUR
A1 - Krüger, Jens
A1 - Grunzke, Richard
A1 - Gesing, Sandra
A1 - Breuers, Sebastian
A1 - Brinkmann, André
A1 - de la Garza, Luis
A1 - Kohlbacher, Oliver
A1 - Kruse, Martin
A1 - Nagel, Wolfgang
A1 - Packschies, Lars
A1 - Müller-Pfefferkorn, Ralph
A1 - Schäfer, Patrick
A1 - Schärfe, Charlotta
A1 - Steinke, Thomas
A1 - Schlemmer, Tobias
A1 - Warzecha, Klaus Dieter
A1 - Zink, Andreas
A1 - Herres-Pawlis, Sonja
T1 - The MoSGrid Science Gateway - A Complete Solution for Molecular Simulations
JF - Journal of Chemical Theory and Computation
Y1 - 2014
U6 - https://doi.org/10.1021/ct500159h
VL - 10
IS - 6
SP - 2232
EP - 2245
ER -