With the recent installation of Cori, a Cray XC40 system with the Intel Xeon Phi Knights Landing (KNL) many integrated core (MIC) architecture, NERSC is transitioning from the multi-core to the more energy-efficient many-core era. The developers of VASP, a widely used materials science code, have adopted MPI/OpenMP parallelism to better exploit the increased on-node parallelism, wider vector units, and the high-bandwidth on-package memory (MCDRAM) of KNL. To achieve optimal performance, KNL specifics relevant to the build, boot, and run-time setup must be explored. In this paper, we present a performance analysis of representative VASP workloads on Cori, focusing on the effects of compilers, libraries, and boot/run-time options such as the NUMA/MCDRAM modes, Hyper-Threading, huge pages, core specialization, and thread scaling. The paper is intended to serve as a KNL performance guide for VASP users, but it will also benefit other KNL users.
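How much the Hyper-Threading and thread-scaling settings matter depends on where OpenMP threads actually end up on the KNL cores. As a minimal illustration (not taken from the paper's workloads; program and variable names are our own), the Fortran probe below reports each thread's binding using only standard omp_lib queries:

```fortran
! Minimal placement probe (hypothetical, not part of VASP): prints, for each
! OpenMP thread, its id, the team size, and the OpenMP place it is bound to,
! which helps when experimenting with Hyper-Threading and thread scaling.
program thread_probe
   use omp_lib
   implicit none
   integer :: tid, nthreads, place

   !$omp parallel private(tid, nthreads, place)
   tid      = omp_get_thread_num()
   nthreads = omp_get_num_threads()
   place    = omp_get_place_num()   ! OpenMP 4.5; returns -1 if the thread is unbound
   !$omp critical
   print '(a,i4,a,i4,a,i4)', 'thread ', tid, ' of ', nthreads, ' on place ', place
   !$omp end critical
   !$omp end parallel
end program thread_probe
```

Rerunning such a probe while varying OMP_NUM_THREADS, OMP_PLACES, and OMP_PROC_BIND is a quick way to confirm that a chosen thread configuration behaves as intended before benchmarking a full application run.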
We describe, for the VASP application (a widely used electronic structure code written in FORTRAN), the transition from an MPI-only to a hybrid code base that leverages the three levels of parallelism relevant for effective execution on modern computer platforms: multiprocessing, multithreading, and SIMD vectorization. To achieve code portability, we rely on MPI parallelization together with OpenMP threading and SIMD constructs. Combining the latter two can be challenging in complex code bases. Optimization targets are combining multithreading and vectorization in different calling contexts as well as whole-function vectorization. In addition to outlining the design decisions made throughout the code transformation process, we demonstrate the effectiveness of the code adaptations using different compilers (GNU, Intel) and target platforms (CPU, Intel Xeon Phi (KNL)).
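As a sketch of what such a combination can look like (hypothetical routine names, not code from VASP), the fragment below pairs OpenMP threading with an inner SIMD loop and marks a small helper for whole-function vectorization via a declare simd directive; it should build with any OpenMP 4.0-capable Fortran compiler such as GNU or Intel:

```fortran
! Hypothetical module, not taken from the VASP sources.
module vec_demo
   implicit none
contains
   ! Whole-function vectorization: the compiler also emits a SIMD variant
   ! of this helper, callable from vectorized loops.
   pure real(8) function scale_add(a, b, s) result(r)
      !$omp declare simd(scale_add)
      real(8), intent(in) :: a, b, s
      r = s*a + b
   end function scale_add

   subroutine apply(x, y, s, n)
      integer, intent(in)    :: n
      real(8), intent(in)    :: s
      real(8), intent(in)    :: x(n)
      real(8), intent(inout) :: y(n)
      integer :: i
      ! Outer threading plus inner SIMD: both levels of on-node parallelism
      ! expressed in one combined construct.
      !$omp parallel do simd schedule(static)
      do i = 1, n
         y(i) = scale_add(x(i), y(i), s)
      end do
      !$omp end parallel do simd
   end subroutine apply
end module vec_demo
```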
The Vienna Ab initio Simulation Package (VASP) is a widely used electronic structure code that originally exploits process-level parallelism through the Message Passing Interface (MPI) for work distribution within and across nodes.
Architectural changes in modern parallel processors urge programmers to address thread- and data-level parallelism as well, in order to benefit most from the available compute resources within a node.
We describe for VASP an approach to MPI + OpenMP parallelization that includes data-level parallelism through OpenMP SIMD constructs together with a generic high-level vector coding scheme.
We demonstrate improved scalability of VASP and a gain of more than 20% over the MPI-only version, as well as a 2x performance increase for collective operations when using the multiple-endpoint MPI feature.
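The multiple-endpoint feature itself is vendor-specific (it is enabled in Intel MPI through environment settings not shown here), but the application-side pattern it builds on is standard MPI: request MPI_THREAD_MULTIPLE and give each OpenMP thread its own duplicated communicator, so that per-thread collectives do not contend for a single communicator. The sketch below shows only that generic pattern; names are hypothetical and it assumes every rank runs the same number of threads.

```fortran
! Generic per-thread-communicator pattern (hypothetical program, not VASP code).
program multi_ep_sketch
   use mpi
   use omp_lib
   implicit none
   integer :: provided, ierr, nthreads, tid
   integer, allocatable :: comms(:)
   real(8) :: local, global

   call MPI_Init_thread(MPI_THREAD_MULTIPLE, provided, ierr)
   if (provided < MPI_THREAD_MULTIPLE) stop 'MPI_THREAD_MULTIPLE not available'

   nthreads = omp_get_max_threads()
   allocate(comms(0:nthreads-1))

   ! One communicator per thread, duplicated in the same order on every rank,
   ! so that concurrent collectives issued by different threads stay independent.
   do tid = 0, nthreads - 1
      call MPI_Comm_dup(MPI_COMM_WORLD, comms(tid), ierr)
   end do

   !$omp parallel private(tid, local, global, ierr)
   tid   = omp_get_thread_num()
   local = real(tid + 1, 8)
   ! Each thread issues its own reduction on its own communicator.
   call MPI_Allreduce(local, global, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                      comms(tid), ierr)
   !$omp end parallel

   call MPI_Finalize(ierr)
end program multi_ep_sketch
```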
The high-level vector coding scheme applied to VASP's generalized gradient approximation (GGA) routine gives up to a 9x performance gain on AVX-512 platforms with the Intel compiler.
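To give an impression of the coding style such a scheme targets (this is not VASP's actual GGA routine), the hypothetical kernel below evaluates a PBE-like exchange enhancement factor over a block of grid points in a single OpenMP SIMD loop; the physical prefactors of the reduced gradient are omitted for brevity.

```fortran
! Hypothetical kernel, not VASP's GGA routine: a PBE-like exchange enhancement
! factor evaluated per grid point inside an OpenMP SIMD loop.
subroutine gga_enhancement(rho, grad, fx, n)
   implicit none
   integer, intent(in)  :: n
   real(8), intent(in)  :: rho(n), grad(n)   ! density and |grad rho| per grid point
   real(8), intent(out) :: fx(n)             ! enhancement factor per grid point
   real(8), parameter   :: kappa = 0.804d0, mu = 0.2195d0
   real(8) :: s2
   integer :: i

   !$omp simd private(s2)
   do i = 1, n
      ! squared reduced gradient (physical prefactors omitted)
      s2    = grad(i)*grad(i) / max(rho(i), 1.0d-12)**(8.0d0/3.0d0)
      fx(i) = 1.0d0 + kappa - kappa / (1.0d0 + mu*s2/kappa)
   end do
end subroutine gga_enhancement
```

Compiled with -fopenmp-simd (GNU) or -qopenmp-simd (Intel), such loops vectorize without introducing any threading, which keeps data-level parallelism orthogonal to the MPI and OpenMP threading layers.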