TY - CHAP
A1 - Christgau, Steffen
A1 - Steinke, Thomas
T1 - Porting a Legacy CUDA Stencil Code to oneAPI
T2 - 2020 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2020, New Orleans, LA, USA, May 18-22, 2020
N2 - Recently, Intel released the oneAPI programming environment. With Data Parallel C++ (DPC++), oneAPI enables codes to target multiple hardware architectures like multi-core CPUs, GPUs, and even FPGAs or other hardware from a single source. For legacy codes that were written for Nvidia GPUs, a compatibility tool is provided which facilitates the transition to the SYCL-based DPC++ programming language. This paper presents early experiences with both the compatibility tool and oneAPI, as well as the employed extension to the SYCL programming standard, for the tsunami simulation code easyWave. A performance study compares the original code, running on Xeon processors with OpenMP and with CUDA, against its DPC++ counterpart on multi-core CPUs and integrated GPUs.
Y1 - 2020
SN - 978-1-7281-7445-7
U6 - https://doi.org/10.1109/IPDPSW50202.2020.00070
SP - 359
EP - 367
PB - IEEE
CY - New Orleans
ER -

TY - JOUR
A1 - Christgau, Steffen
A1 - Schnor, Bettina
T1 - Comparing MPI Passive Target Synchronization on a Non-Cache-Coherent Shared-Memory Processor
JF - Mitteilungen - Gesellschaft für Informatik e. V., Parallel-Algorithmen und Rechnerstrukturen, ISSN 0177-0454, 28. PARS-Workshop
Y1 - 2020
IS - 35
SP - 121
EP - 132
ER -

TY - CHAP
A1 - Christgau, Steffen
A1 - Steinke, Thomas
T1 - Leveraging a Heterogeneous Memory System for a Legacy Fortran Code: The Interplay of Storage Class Memory, DRAM and OS
T2 - 2020 IEEE/ACM Workshop on Memory Centric High Performance Computing (MCHPC)
N2 - Large capacity Storage Class Memory (SCM) opens new possibilities for workloads requiring a large memory footprint. We examine optimization strategies for a legacy Fortran application on systems with a heterogeneous memory configuration comprising SCM and DRAM. We present a performance study for the multigrid solver component of the large-eddy simulation framework PALM for different memory configurations with large capacity SCM. An important optimization approach is the explicit assignment of storage locations, depending on the data access characteristics, to take advantage of the heterogeneous memory configuration. We demonstrate that explicit control over memory locations provides better performance than transparent hardware settings. Since page management by the OS appears to be a critical performance factor on such systems, we also study the impact of different huge page settings.
Y1 - 2020
SN - 978-0-7381-1067-7
U6 - https://doi.org/10.1109/MCHPC51950.2020.00008
SP - 17
EP - 24
PB - IEEE
ER -

TY - GEN
A1 - Christgau, Steffen
A1 - Schnor, Bettina
T1 - MPI Passive Target Synchronization on a Non-Cache-Coherent Shared-Memory Processor
N2 - MPI passive target synchronization offers exclusive and shared locks. These are the building blocks for the implementation of applications with Readers & Writers semantics, such as distributed hash tables. This paper discusses the implementation of MPI passive target synchronization on a non-cache-coherent multicore, the Intel Single-Chip Cloud Computer. The considered algorithms differ in their communication style (message based versus shared memory), their data structures (centralized versus distributed), and their semantics (with/without Writer preference).
It is shown that shared memory approaches scale very well and deliver good performance, even in the absence of cache coherence.
T3 - ZIB-Report - 19-52
KW - process synchronization
KW - programming models and systems for manycores
KW - MPI
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-74774
SN - 1438-0064
ER -

TY - CHAP
A1 - Brook, Glenn
A1 - Fuller, Douglas
A1 - Swinburne, John
A1 - Christgau, Steffen
A1 - Läuter, Matthias
A1 - Rodrigues Pelá, Ronaldo
A1 - Stein, Lewin
A1 - Tuma, Christian
A1 - Steinke, Thomas
T1 - An Early Scalability Study of Omni-Path Express
N2 - This work provides a brief description of Omni-Path Express and the current status of its development, stability, and performance. Basic benchmarks that highlight the gains of OPX over PSM2 are provided, and the results of an initial performance and scalability study of several applications are presented.
Y1 - 2022
UR - https://www.ixpug.org/events/isc22-ixpug-workshop
U6 - https://doi.org/10.13140/RG.2.2.21353.57442
CY - Hamburg
ER -

TY - CHAP
A1 - Christgau, Steffen
A1 - Knaust, Marius
A1 - Steinke, Thomas
T1 - A First Step towards Support for MPI Partitioned Communication on SYCL-programmed FPGAs
T2 - IEEE/ACM International Workshop on Heterogeneous High-performance Reconfigurable Computing, H2RC@SC 2022, Dallas, TX, USA, November 13-18, 2022
N2 - Version 4.0 of the Message Passing Interface standard introduced the concept of Partitioned Communication, which adds support for multiple contributions to a communication buffer. Although initially targeted at multithreaded MPI applications, Partitioned Communication currently receives attention in the context of accelerators, especially GPUs. This publication demonstrates that this communication concept can also be implemented for SYCL-programmed FPGAs. This includes a discussion of the design space and the presentation of a prototypical implementation. Experimental results show that a lightweight implementation on top of an existing MPI library is possible. In addition, the presented approach reveals issues in both the SYCL and the MPI standard which need to be addressed for improved support of the intended communication style.
Y1 - 2022
U6 - https://doi.org/10.1109/H2RC56700.2022.00007
SP - 9
EP - 17
PB - IEEE
ER -

TY - CHAP
A1 - Christgau, Steffen
A1 - Everingham, Dylan
A1 - Mikolajczak, Florian
A1 - Schelten, Niklas
A1 - Schnor, Bettina
A1 - Schroetter, Max
A1 - Stabernack, Benno
A1 - Steinert, Fritjof
T1 - Enabling Communication with FPGA-based Network-attached Accelerators for HPC Workloads
T2 - Proceedings of the SC'23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, SC-W 2023, Denver, CO, USA, November 12-17, 2023
N2 - The use of stand-alone, network-coupled Field Programmable Gate Array (FPGA) accelerators is intended to significantly increase the energy efficiency of HPC applications and thus also of HPC data centers. A loose coupling between the nodes of the HPC data center and the FPGAs is established through the high-speed network of the data center. This allows greater flexibility in combining different nodes and accelerators. Both the resulting energy savings and the increased flexibility of the network connection enable the economical use of FPGAs. This work presents a communication stack to integrate the so-called Network-attached Accelerator (NAA) into the HPC data center.
A low-level Remote Direct Memory Access (RDMA) Application Programming Interface (API) and a high-level Remote Procedure Call (RPC) API are designed on top of the RDMA over Converged Ethernet v2 (RoCEv2) communication stack. The experimental results over 100 Gbps RoCEv2 show that our design and implementation deliver performance close to the theoretical maximum.
Y1 - 2023
U6 - https://doi.org/10.1145/3624062.3624540
SP - 530
EP - 538
PB - ACM
ER -

TY - CHAP
A1 - Skoblin, Viktor
A1 - Höfling, Felix
A1 - Christgau, Steffen
T1 - Gaining Cross-Platform Parallelism for HAL’s Molecular Dynamics Package using SYCL
T2 - 29. PARS-Workshop 2023
N2 - Molecular dynamics simulations are one of the methods in scientific computing that benefit from GPU acceleration. For those devices, SYCL is a promising API for writing portable codes. In this paper, we present the case study of HAL’s MD package, which has been successfully migrated from CUDA to SYCL. We describe the different strategies that we followed in the process of porting the code. Following these strategies, we achieved code portability across major GPU vendors. Depending on the actual kernels, both significant performance improvements and regressions are observed. As a side effect of the migration process, we also obtained impressive speedups for execution on CPUs.
Y1 - 2023
SN - 0177-0454
VL - 36
ER -