The computational power and availability of graphics processing units (GPUs), such as the Nvidia Tesla, and Many Integrated Core (MIC) processors, such as the Intel Xeon Phi, on high performance computing (HPC) systems are rapidly evolving. However, HPC applications need to be ported to take advantage of such hardware. This paper reports our experience of porting the MPI+OpenMP parallelised large-eddy simulation model (PALM) to multi-GPU as well as to MIC processor environments, using the directive-based high-level programming paradigms OpenACC and OpenMP, respectively. PALM is a Fortran-based computational fluid dynamics software package used for the simulation of atmospheric and oceanic boundary layers, addressing questions in fundamental atmospheric turbulence research, urban modelling, aircraft safety and cloud physics. Development of PALM started in 1997; the code base currently comprises 140 kLOC and runs on HPC systems with up to 43,200 cores. The main challenges we faced during the porting process were the size and complexity of the PALM code base, its inconsistent modularisation, and the complete lack of a unit-test suite. We report the methods used to identify performance issues as well as our experiences with state-of-the-art profiling tools. Moreover, we outline the porting steps required to execute our code properly on GPUs and MIC processors, describe the problems and bottlenecks we encountered along the way, and present separate performance tests for both architectures. These performance tests, however, do not provide benchmark information comparing the performance of the ported code between the two architectures.
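To make the directive-based paradigm concrete, the sketch below shows the general shape of such a port on a toy stencil loop. It is a minimal C++ analogue (PALM itself is Fortran), with illustrative names and no connection to actual PALM kernels.

```cpp
// Minimal C++ analogue of directive-based offloading; the stencil and
// all names are illustrative, not taken from PALM.
void diffuse(const double* u, double* u_new, int nx, double coeff) {
    // OpenACC: execute the loop on the accelerator, copying u in and
    // u_new back. The OpenMP 4.x offload analogue would be:
    //   #pragma omp target teams distribute parallel for map(to: u[0:nx]) map(tofrom: u_new[0:nx])
    #pragma acc parallel loop copyin(u[0:nx]) copy(u_new[0:nx])
    for (int i = 1; i < nx - 1; ++i)
        u_new[i] = u[i] + coeff * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
}
```

Part of the appeal of this approach is that the annotated loop still compiles and runs as ordinary serial code on compilers that ignore the directives.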
The effectiveness of ahead-of-time compiler optimization heavily depends on the amount of information available at compile time. Input-specific information that is only available at runtime cannot be used, although it often determines loop counts, branching predicates and paths, as well as memory-access patterns. It can also be crucial for generating efficient SIMD-vectorized code. This is especially relevant for the many-core architectures paving the way to exascale computing, which are more sensitive to code optimization. We explore the design space for using input-specific information at compile time and present KART, a C++ library solution that allows developers to compile, link, and execute code (e.g., C, C++, Fortran) at application runtime. Beyond mere runtime compilation of performance-critical code, KART can be used to instantiate the same code multiple times using different inputs, compilers, and options. Other techniques like auto-tuning and code generation can be integrated into a KART-enabled application instead of being scripted around it. We evaluate runtimes and compilation costs for different synthetic kernels, and show the effectiveness for two real-world applications, HEOM and a WSM6 proxy.
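KART's own API is not reproduced here. As a hedged illustration of the underlying technique, the sketch below bakes a runtime-known loop count into generated source, invokes the system compiler, and loads the result via dlopen; all file and function names are hypothetical.

```cpp
// Sketch of the technique only -- NOT the KART API. Requires POSIX
// dlopen and a C compiler on PATH; link with -ldl.
#include <cstdlib>
#include <dlfcn.h>
#include <fstream>

using kernel_fn = void (*)(const double*, double*);

kernel_fn compile_kernel(int n) {  // n becomes known only at runtime
    {
        std::ofstream src("kernel.c");
        // The loop bound is emitted as a literal, so the compiler can
        // unroll and vectorize for exactly this input size.
        src << "void kern(const double* x, double* y) {\n"
            << "  for (int i = 0; i < " << n << "; ++i) y[i] = 2.0 * x[i];\n"
            << "}\n";
    }
    if (std::system("cc -O3 -march=native -shared -fPIC kernel.c -o kernel.so") != 0)
        return nullptr;  // compilation failed
    void* lib = dlopen("./kernel.so", RTLD_NOW);
    return lib ? reinterpret_cast<kernel_fn>(dlsym(lib, "kern")) : nullptr;
}
```

Because the loop bound is a compile-time constant in the generated source, the ahead-of-time limitation described above disappears for this kernel.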
Effective vectorization is becoming increasingly important for high performance and energy efficiency on processors with wide SIMD units. Compilers often require programmers to identify opportunities for vectorization, using directives to rule out data dependences. The OpenMP 4.x SIMD directives strive to provide portability. We investigate the ability of current compilers (GNU, Clang, and Intel) to generate SIMD code for microbenchmarks that cover common patterns in scientific codes and for two kernels from the VASP and MOM5/ERGOM applications. We explore coding strategies for improving SIMD performance across different compilers and platforms (Intel® Xeon® processors and the Intel® Xeon Phi™ (co)processor). We compare OpenMP 4.x SIMD vectorization with and without vector data types against SIMD intrinsics and C++ SIMD types. Our experiments show that in many cases portable performance can be achieved. All microbenchmarks are available as open source as a reference for programmers and compiler experts to enhance SIMD code generation.
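For readers unfamiliar with the directives, the sketch below shows the two OpenMP 4.x constructs such benchmarks revolve around; the kernels are illustrative, not the paper's actual sources.

```cpp
// Compile with -fopenmp-simd (GNU/Clang) or -qopenmp-simd (Intel) to
// enable the SIMD directives without the full OpenMP runtime.

// Without the directive, a compiler may refuse to vectorize because it
// cannot prove that x and y do not alias; 'omp simd' rules that out.
void saxpy(float a, const float* x, float* y, int n) {
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// 'declare simd' instructs the compiler to also emit a vector variant
// of a function, so calls inside vectorized loops can be vectorized too.
#pragma omp declare simd
float scale(float a, float x) { return a * x; }
```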
The Rosetta probe around comet 67P/Churyumov–Gerasimenko (67P) reveals an anisotropic dust distribution of the inner coma with jet-like structures. The physical processes leading to jet formation are under debate, with most models for cometary activity focusing on localized emission sources, such as cliffs or terraced regions. Here we suggest, by correlating high-resolution simulations of the dust environment around 67P with observations, that the anisotropy and the background dust density of 67P originate from dust released across the entire sunlit surface of the nucleus rather than from a few isolated sources. We trace back trajectories from coma regions with high local dust density in space to the non-spherical nucleus and identify two mechanisms of jet formation: areas with local concavity in either two dimensions or only one. Pits and craters are examples of the first case; the neck region of the bi-lobed nucleus of 67P is an example of the latter case. The conjunction of multiple sources, in addition to dust released from all other sunlit areas, results in a high correlation coefficient (~0.8) of the predictions with observations during a complete diurnal rotation period of 67P.
We compute trajectories of dust grains starting from a homogeneous surface activity profile on an irregularly shaped cometary nucleus. Despite the initially homogeneous dust distribution, a collimation into jet-like structures becomes visible. The fine structure is caused by concave topographical features with similar bundles of normal vectors. The model incorporates accurately determined gravitational forces, rotation of the nucleus, and gas-dust interaction. Jet-like dust structures are obtained for a wide range of gas-dust interactions. For comet 67P/Churyumov-Gerasimenko, we derive the global dust distribution around the nucleus and find several areas of agreement between the homogeneous dust emission model and Rosetta observations of dust jets, including velocity-dependent bending of trajectories.
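As a hedged illustration of what such a trajectory computation involves, the sketch below integrates a single grain under strongly simplified forces: point-mass gravity instead of the accurately determined field of the irregular nucleus, a linear gas-drag law, and no nucleus rotation. All names and constants are illustrative.

```cpp
// Simplified dust-grain trajectory step: point-mass gravity plus a
// linear gas-drag term, advanced with kick-drift-kick leapfrog.
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

Vec3 accel(const Vec3& r, const Vec3& v, const Vec3& v_gas,
           double GM, double drag_coeff) {
    double d = std::sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]);
    double g = -GM / (d * d * d);  // point-mass gravity
    Vec3 a;
    for (int k = 0; k < 3; ++k)    // gravity + drag toward gas velocity
        a[k] = g * r[k] + drag_coeff * (v_gas[k] - v[k]);
    return a;
}

void step(Vec3& r, Vec3& v, const Vec3& v_gas,
          double GM, double drag_coeff, double dt) {
    Vec3 a = accel(r, v, v_gas, GM, drag_coeff);
    for (int k = 0; k < 3; ++k) {  // half kick, then full drift
        v[k] += 0.5 * dt * a[k];
        r[k] += dt * v[k];
    }
    Vec3 a2 = accel(r, v, v_gas, GM, drag_coeff);
    for (int k = 0; k < 3; ++k)    // second half kick
        v[k] += 0.5 * dt * a2[k];
}
```

The velocity-dependent bending mentioned above emerges naturally from such dynamics: slower grains stay under the influence of gravity and the gas flow longer, so their paths curve more.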
Computing the Hierarchical Equations of Motion (HEOM) is by itself a challenging problem, and so is writing portable production code that runs efficiently on a variety of architectures while scaling from PCs to supercomputers. We combined both challenges to push the boundaries of simulating quantum systems, and to evaluate and improve methodologies for scientific software engineering.
Our contributions are threefold: we present the first distributed-memory implementation of the HEOM method (DM-HEOM), we describe an interdisciplinary development workflow, and we provide guidelines and experiences for designing distributed, performance-portable HPC applications with MPI-3, OpenCL, and other state-of-the-art programming models. We evaluate the resulting code on multi- and many-core CPUs as well as GPUs, and demonstrate scalability on a Cray XC40 supercomputer for the PS I molecular light-harvesting complex.
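DM-HEOM's actual decomposition is described in the paper and is not reproduced here; the sketch below only illustrates the generic distributed-memory pattern such codes rely on, namely each rank owning a block of hierarchy nodes and exchanging boundary nodes with its neighbours each step. Names and layout are hypothetical.

```cpp
// Generic MPI halo exchange for a 1-D block decomposition. 'nodes' is
// laid out as [left halo | owned nodes ... | right halo], each slot
// holding node_size doubles; 'left'/'right' are neighbour ranks (use
// MPI_PROC_NULL at the ends). This is NOT the DM-HEOM scheme.
#include <mpi.h>
#include <vector>

void halo_exchange(std::vector<double>& nodes, int node_size,
                   int left, int right, MPI_Comm comm) {
    MPI_Request reqs[4];
    // send first and last owned node ...
    MPI_Isend(nodes.data() + node_size, node_size, MPI_DOUBLE,
              left, 0, comm, &reqs[0]);
    MPI_Isend(nodes.data() + nodes.size() - 2 * node_size, node_size,
              MPI_DOUBLE, right, 1, comm, &reqs[1]);
    // ... and receive the neighbours' boundary nodes into the halos
    MPI_Irecv(nodes.data(), node_size, MPI_DOUBLE, left, 1, comm, &reqs[2]);
    MPI_Irecv(nodes.data() + nodes.size() - node_size, node_size,
              MPI_DOUBLE, right, 0, comm, &reqs[3]);
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
}
```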
Time- and frequency-resolved optical signals provide insights into the properties of light-harvesting molecular complexes, including excitation energies, dipole strengths and orientations, as well as into the exciton energy flow through the complex. The hierarchical equations of motion (HEOM) provide a unifying theory that allows one to study the combined effects of system-environment dissipation and non-Markovian memory without making restrictive assumptions about weak or strong couplings or the separability of vibrational and electronic degrees of freedom. With increasing system size, the exact solution of the open quantum system dynamics requires memory and compute resources beyond a single compute node. To overcome this barrier, we developed a scalable variant of HEOM. Our distributed-memory HEOM, DM-HEOM, is a universal tool for open quantum system dynamics. It is used to accurately compute all experimentally accessible time- and frequency-resolved processes in light-harvesting molecular complexes with arbitrary system-environment couplings for a wide range of temperatures and complex sizes.
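For orientation, one standard textbook form of the hierarchy (for a Drude-Lorentz bath with a single exponential mode per site, following Ishizaki and Fleming) is reproduced below; this is not necessarily the exact formulation implemented in DM-HEOM.

```latex
\begin{equation}
\frac{\partial \rho_{\mathbf{n}}}{\partial t}
  = -\frac{i}{\hbar}\,[H_S, \rho_{\mathbf{n}}]
  - \sum_{m} n_m \gamma_m \, \rho_{\mathbf{n}}
  - i \sum_{m} \bigl[ V_m, \rho_{\mathbf{n}_m^{+}} \bigr]
  - i \sum_{m} n_m \bigl( c_m V_m \rho_{\mathbf{n}_m^{-}}
                        - c_m^{*}\, \rho_{\mathbf{n}_m^{-}} V_m \bigr)
\end{equation}
```

Here the $\rho_{\mathbf{n}}$ are auxiliary density operators indexed by $\mathbf{n} = (n_1, \dots, n_M)$, $\mathbf{n}_m^{\pm}$ raises or lowers the $m$-th index, $V_m$ couples site $m$ to its bath, and $c_m$, $\gamma_m$ are the bath correlation expansion coefficients; $\rho_{(0,\dots,0)}$ is the physical reduced density matrix. The size of this coupled hierarchy is what pushes memory requirements beyond a single node.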
Energy flow in the Photosystem I supercomplex: comparison of approximative theories with DM-HEOM
(2018)
We analyze the exciton dynamics in Photosystem I from Thermosynechococcus elongatus using the distributed-memory implementation of the hierarchical equations of motion (DM-HEOM) for the 96 chlorophylls in the monomeric unit. The exciton system parameters are taken from a first-principles calculation. A comparison of the exact results with Förster rates and Markovian approximations allows one to validate the exciton transfer times within the complex and to identify deviations from the approximative theories. We show the optical absorption, linear dichroism, and circular dichroism spectra obtained with DM-HEOM and compare them to experimental results.
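For reference, the Förster rate used as a point of comparison is, in one common convention, the lineshape-overlap expression below (normalization conventions vary between references):

```latex
\begin{equation}
k_{D \to A} = \frac{|J_{DA}|^{2}}{\hbar^{2}}
\int_{-\infty}^{\infty} \frac{\mathrm{d}\omega}{2\pi}\,
F_{D}(\omega)\, A_{A}(\omega)
\end{equation}
```

with electronic coupling $J_{DA}$ between donor $D$ and acceptor $A$, donor emission lineshape $F_D$, and acceptor absorption lineshape $A_A$. Deviations of the exact DM-HEOM transfer times from such rates indicate where the underlying weak-coupling assumption breaks down.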