TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Klemm, Michael
A1 - Reinefeld, Alexander
ED - Reinders, James
ED - Jeffers, Jim
T1 - Concurrent Kernel Offloading
T2 - High Performance Parallelism Pearls
Y1 - 2014
UR - http://store.elsevier.com/High-Performance-Parallelism-Pearls/James-Reinders/isbn-9780128021187/
SN - 978-0128021187
N1 - Target publication date: November 2014
PB - Morgan Kaufmann, Elsevier
ER -
TY - CHAP
A1 - Wende, Florian
A1 - Cordes, Frank
A1 - Steinke, Thomas
T1 - On Improving the Performance of Multi-threaded CUDA Applications with Concurrent Kernel Execution by Kernel Reordering
T2 - Application Accelerators in High Performance Computing (SAAHPC), 2012 Symposium on
Y1 - 2012
U6 - https://doi.org/10.1109/SAAHPC.2012.12
SP - 74
EP - 83
ER -
TY - GEN
A1 - Dreßler, Sebastian
A1 - Steinke, Thomas
T1 - An Automated Approach for Estimating the Memory Footprint of Non-Linear Data Objects
N2 - Current programming models for heterogeneous devices with disjoint physical memory spaces require explicit allocation of device memory and explicit data transfers. While it is quite easy to implement these operations manually for linear data objects such as arrays, the task becomes more difficult for non-linear objects, e.g. linked lists or classes with multiple inheritance. The difficulties arise from dynamic memory requirements at run-time and from dependencies between data structures. In this paper we present a novel method to build a graph-based static data type description, which is used to create code for injectable functions that automatically determine the memory footprint of data objects at run-time. Our approach can be extended to automatically generate optimized data transfers across physical memory spaces.
T3 - ZIB-Report - 13-46
KW - memory footprint
KW - non-linear objects
KW - static analysis
KW - dynamic analysis
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42224
SN - 1438-0064
ER -
TY - GEN
A1 - Dreßler, Sebastian
A1 - Steinke, Thomas
T1 - A Novel Hybrid Approach to Automatically Determine Kernel Interface Data Volumes
N2 - Scheduling algorithms for heterogeneous platforms base their decisions on several metrics. One of these metrics is the amount of data to be transferred to and from the accelerator. However, determining this metric automatically is not a simple task. A few schedulers and runtime systems solve this problem with regression models, which are, however, imprecise. Our novel approach to determining data volumes removes this limitation and thus provides exact information.
T3 - ZIB-Report - 12-23
KW - function parameter analysis
KW - function parameter sizes
KW - heterogeneous systems
KW - scheduling metrics
KW - tool flow
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-15569
SN - 1438-0064
ER -
TY - CHAP
A1 - Noack, Matthias
A1 - Wende, Florian
A1 - Zitzlsberger, Georg
A1 - Klemm, Michael
A1 - Steinke, Thomas
T1 - KART - A Runtime Compilation Library for Improving HPC Application Performance
T2 - High Performance Computing: ISC High Performance 2017 International Workshops, DRBSD, ExaComm, HCPM, HPC-IODC, IWOPH, IXPUG, P^3MA, VHPC, Visualization at Scale, WOPSSS, Frankfurt, Germany, June 18-22, 2017, Revised Selected Papers
N2 - The effectiveness of ahead-of-time compiler optimization heavily depends on the amount of available information at compile time.
Input-specific information that is only available at runtime cannot be used, although it often determines loop counts, branching predicates and paths, as well as memory-access patterns. It can also be crucial for generating efficient SIMD-vectorized code. This is especially relevant for the many-core architectures paving the way to exascale computing, which are more sensitive to code optimization. We explore the design space for using input-specific information at compile time and present KART, a C++ library solution that allows developers to compile, link, and execute code (e.g., C, C++, Fortran) at application runtime. Besides mere runtime compilation of performance-critical code, KART can be used to instantiate the same code multiple times using different inputs, compilers, and options. Other techniques like auto-tuning and code generation can be integrated into a KART-enabled application instead of being scripted around it. We evaluate runtimes and compilation costs for different synthetic kernels, and show the effectiveness for two real-world applications, HEOM and a WSM6 proxy.
Y1 - 2017
U6 - https://doi.org/10.1007/978-3-319-67630-2_29
N1 - Best Paper Award
VL - 10524
SP - 389
EP - 403
PB - Springer International Publishing
ER -
TY - JOUR
A1 - Alhaddad, Samer
A1 - Förstner, Jens
A1 - Groth, Stefan
A1 - Grünewald, Daniel
A1 - Grynko, Yevgen
A1 - Hannig, Frank
A1 - Kenter, Tobias
A1 - Pfreundt, Franz-Josef
A1 - Plessl, Christian
A1 - Schotte, Merlind
A1 - Steinke, Thomas
A1 - Teich, Jürgen
A1 - Weiser, Martin
A1 - Wende, Florian
T1 - HighPerMeshes - A Domain-Specific Language for Numerical Algorithms on Unstructured Grids
JF - Euro-Par 2020: Parallel Processing Workshops
N2 - Solving partial differential equations on unstructured grids is a cornerstone of engineering and scientific computing. Nowadays, heterogeneous parallel platforms with CPUs, GPUs, and FPGAs enable energy-efficient and computationally demanding simulations. We developed the HighPerMeshes C++-embedded Domain-Specific Language (DSL) to bridge the abstraction gap between the mathematical and algorithmic formulation of mesh-based algorithms for PDE problems on the one hand and the growing number of heterogeneous platforms with their different parallel programming and runtime models on the other hand. Thus, the HighPerMeshes DSL aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HighPerMeshes DSL, and demonstrate its usage with three examples: a Poisson and a monodomain problem, both solved by the continuous finite element method, and the discontinuous Galerkin method for Maxwell's equations. The mapping of the abstract algorithmic description onto parallel hardware, including distributed-memory compute clusters, is presented. Finally, the achievable performance and scalability are demonstrated for a typical example problem on a multi-core CPU cluster.
Y1 - 2021
U6 - https://doi.org/10.1007/978-3-030-71593-9_15
SP - 185
EP - 196
PB - Springer
ER -
TY - CHAP
A1 - Wende, Florian
A1 - Noack, Matthias
A1 - Steinke, Thomas
A1 - Klemm, Michael
A1 - Zitzlsberger, Georg
A1 - Newburn, Chris J.
ED - Dutot, Pierre-Francois
ED - Trystram, Denis
T1 - Portable SIMD Performance with OpenMP* 4.x Compiler Directives
N2 - Effective vectorization is becoming increasingly important for high performance and energy efficiency on processors with wide SIMD units.
Compilers often require programmers to identify opportunities for vectorization, using directives to rule out data dependences. The OpenMP 4.x SIMD directives strive to provide portability. We investigate the ability of current compilers (GNU, Clang, and Intel) to generate SIMD code for microbenchmarks that cover common patterns in scientific codes and for two kernels from the VASP and MOM5/ERGOM applications. We explore coding strategies for improving SIMD performance across different compilers and platforms (Intel® Xeon® processor and Intel® Xeon Phi™ (co)processor). We compare OpenMP* 4.x SIMD vectorization with and without vector data types against SIMD intrinsics and C++ SIMD types. Our experiments show that in many cases portable performance can be achieved. All microbenchmarks are available as open source as a reference for programmers and compiler experts to enhance SIMD code generation.
Y1 - 2016
SN - 978-3-319-43659-3
U6 - https://doi.org/10.1007/978-3-319-43659-3_20
VL - Euro-Par 2016: Parallel Processing: 22nd International Conference on Parallel and Distributed Computing
PB - Springer International Publishing
ER -
TY - JOUR
A1 - Wende, Florian
A1 - Marsman, Martijn
A1 - Kim, Jeongnim
A1 - Vasilev, Fedor
A1 - Zhao, Zhengji
A1 - Steinke, Thomas
T1 - OpenMP in VASP: Threading and SIMD
JF - International Journal of Quantum Chemistry
N2 - The Vienna Ab initio Simulation Package (VASP) is a widely used electronic structure code that originally exploits process-level parallelism through the Message Passing Interface (MPI) for work distribution within and across nodes. Architectural changes in modern parallel processors urge programmers to also address thread- and data-level parallelism to benefit most from the available compute resources within a node. We describe our approach to an MPI + OpenMP parallelization of VASP, including data-level parallelism through OpenMP SIMD constructs together with a generic high-level vector coding scheme. We demonstrate improved scalability of VASP with more than 20% gain over the MPI-only version, as well as a 2x performance increase for collective operations using the multiple-endpoint MPI feature. The high-level vector coding scheme applied to VASP's general gradient approximation routine gives up to a 9x performance gain on AVX512 platforms with the Intel compiler.
Y1 - 2018
U6 - https://doi.org/10.1002/qua.25851
IS - Emerging Architectures in Computational Chemistry
SP - e25851
PB - Wiley Online Library
ER -
TY - CHAP
A1 - Wende, Florian
A1 - Noack, Matthias
A1 - Schütt, Thorsten
A1 - Sachs, Stephen
A1 - Steinke, Thomas
T1 - Application Performance on a Cray XC30 Evaluation System with Xeon Phi Coprocessors at HLRN-III
T2 - Cray User Group
Y1 - 2015
ER -
TY - CHAP
A1 - Wende, Florian
A1 - Marsman, Martijn
A1 - Steinke, Thomas
T1 - On Enhancing 3D-FFT Performance in VASP
T2 - CUG Proceedings
Y1 - 2016
ER -