In this work, we introduce VIPER, a framework for visualization of and interactivity with physics engines in real time. It is able to execute various physical simulations, visualize the simulation results in real time, and offer computational steering. Especially interesting in this context are simulations running on remotely accessible HPC clusters. As an example, we present a particulate flow simulation consisting of a coupled rigid-body and CFD simulation, together with the chosen visualization strategy and steering possibilities. Additionally, we provide performance evaluations and a performance prediction model for the update rate of remote simulations within the VIPER framework.
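The steering loop at the heart of such a setup can be summarized as: advance the simulation, ship the current state to the visualization, and apply any user commands that arrived in the meantime. The following is a minimal, self-contained sketch of that control flow; `Simulation`, `SteeringCommand`, `pollSteering`, and all parameter names are hypothetical and not part of the actual VIPER API, and the network transport to a remote HPC system is replaced by a local stub.

```cpp
// Hypothetical sketch of an interactive simulation loop with computational
// steering. All types and names are illustrative only, not the VIPER API;
// the remote transport is replaced by a local stub.
#include <chrono>
#include <iostream>
#include <optional>
#include <thread>

struct SteeringCommand {
    double inflowVelocity;  // example of a parameter a user might change live
};

struct Simulation {
    double inflowVelocity = 1.0;
    int    step           = 0;
    void advance() { ++step; }  // stand-in for one coupled CFD/rigid-body step
    void apply(const SteeringCommand& c) { inflowVelocity = c.inflowVelocity; }
};

// Stub: in a remote setup this would poll a network channel for user input.
std::optional<SteeringCommand> pollSteering(int frame) {
    if (frame == 5) return SteeringCommand{2.5};
    return std::nullopt;
}

int main() {
    Simulation sim;
    for (int frame = 0; frame < 10; ++frame) {
        sim.advance();                                          // advance the simulation
        if (auto cmd = pollSteering(frame)) sim.apply(*cmd);    // apply steering input
        // Visualization stub: ship the current state to the renderer.
        std::cout << "frame " << frame << "  inflow " << sim.inflowVelocity << "\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(10));  // pace the update rate
    }
}
```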
Many applications in scientific computing require solving one or more partial differential equations (PDEs). For this task, solvers from the class of multigrid methods are known to be amongst the most efficient. An optimal implementation, however, depends strongly on the specific problem as well as on the target hardware. As energy efficiency is a major concern in today's computing centers, energy-efficient platforms such as ARM-based clusters are actively researched. In this work, we present a domain-specific approach that starts with the problem formulation in a domain-specific language (DSL) and proceeds down to code generation targeting a variety of systems, including embedded architectures. Furthermore, we present an approach to simulating embedded architectures in order to achieve an optimal hardware/software co-design, i.e., an optimal composition of software and hardware modifications. In this context, we use a virtual platform environment (OVP) that enables the efficient adaptation and simulation of multicore models. Our approach shows that execution-time prediction for ARM-based platforms is feasible but has to be enhanced with more detailed cache and memory models. We substantiate our claims by providing results for the performance prediction of geometric multigrid solvers generated by the ExaStencils framework.
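To make the solver class at the center of this work concrete, the following is a minimal, hand-written sketch of a geometric multigrid V-cycle for the 1D Poisson problem -u'' = f with homogeneous Dirichlet boundaries. It illustrates the algorithm only and is not code emitted by ExaStencils; all names and the choice of a weighted Jacobi smoother are our own assumptions.

```cpp
// Minimal geometric multigrid V-cycle sketch for -u'' = f in 1D with
// homogeneous Dirichlet boundaries (illustration only, not generated code).
#include <cmath>
#include <cstdio>
#include <vector>

using Grid = std::vector<double>;  // includes the two boundary points

// Weighted Jacobi smoother for the 3-point Poisson stencil with spacing h.
void smooth(Grid& u, const Grid& f, double h, int sweeps) {
    const double omega = 2.0 / 3.0;
    Grid tmp = u;
    for (int s = 0; s < sweeps; ++s) {
        for (std::size_t i = 1; i + 1 < u.size(); ++i)
            tmp[i] = (1.0 - omega) * u[i]
                   + omega * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i]);
        u.swap(tmp);
    }
}

Grid residual(const Grid& u, const Grid& f, double h) {
    Grid r(u.size(), 0.0);
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h);
    return r;
}

Grid restrictFullWeighting(const Grid& fine) {
    Grid coarse(fine.size() / 2 + 1, 0.0);
    for (std::size_t i = 1; i + 1 < coarse.size(); ++i)
        coarse[i] = 0.25 * (fine[2 * i - 1] + 2.0 * fine[2 * i] + fine[2 * i + 1]);
    return coarse;
}

void prolongateAdd(Grid& fine, const Grid& coarse) {  // linear interpolation
    for (std::size_t i = 0; i + 1 < coarse.size(); ++i) {
        fine[2 * i]     += coarse[i];
        fine[2 * i + 1] += 0.5 * (coarse[i] + coarse[i + 1]);
    }
    fine.back() += coarse.back();
}

void vCycle(Grid& u, const Grid& f, double h) {
    if (u.size() <= 3) {            // coarsest grid: smooth until (nearly) exact
        smooth(u, f, h, 50);
        return;
    }
    smooth(u, f, h, 2);                                   // pre-smoothing
    Grid rc = restrictFullWeighting(residual(u, f, h));   // restrict residual
    Grid ec(rc.size(), 0.0);                              // coarse-grid error
    vCycle(ec, rc, 2.0 * h);
    prolongateAdd(u, ec);                                 // coarse-grid correction
    smooth(u, f, h, 2);                                   // post-smoothing
}

int main() {
    const int n = 129;                  // 2^7 + 1 grid points
    const double h = 1.0 / (n - 1);
    Grid u(n, 0.0), f(n, 1.0);          // f = 1, zero initial guess
    for (int it = 0; it < 10; ++it) {
        vCycle(u, f, h);
        Grid r = residual(u, f, h);
        double norm2 = 0.0;
        for (double v : r) norm2 += v * v;
        std::printf("V-cycle %d  residual norm %.3e\n", it + 1, std::sqrt(norm2));
    }
}
```

Each V-cycle combines cheap smoothing on the fine grid with a recursively computed coarse-grid correction, which is what gives multigrid its characteristic mesh-independent convergence.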
Automatic Code Generation for Massively Parallel Applications in Computational Fluid Dynamics
(2019)
Solving partial differential equations (PDEs) is a fundamental challenge in many application domains in industry and academia alike. With increasingly large problems, efficient and highly scalable implementations become more and more crucial. Today, facing this challenge is more difficult than ever due to the increasingly heterogeneous hardware landscape. One promising approach is developing domain-specific languages (DSLs) for a set of applications. Using code generation techniques then allows targeting a range of hardware platforms while concurrently applying domain-specific optimizations in an automated fashion. The present work aims to further the state of the art in this field. As our domain, we choose PDE solvers and, in particular, those from the group of geometric multigrid methods. To avoid too broad a focus, we restrict ourselves to methods working on structured and patch-structured grids.
We face the challenge of handling a domain as complex as ours, while providing different abstractions for diverse user groups, by splitting our external DSL ExaSlang into multiple layers, each specifying different aspects of the final application. Layer 1 is designed to resemble LaTeX and allows inputting continuous equations and functions. Their discretization is expressed on layer 2. It is complemented by algorithmic components, which can be implemented in a Matlab-like syntax on layer 3. All information provided to this point is summarized on layer 4, enriched with particulars about data structures and the employed parallelization. Additionally, we support automated progression between the different layers. All ExaSlang input is processed by our jointly developed Scala code generation framework to ultimately emit C++ code. We particularly focus on how to generate applications parallelized with, e.g., MPI and OpenMP that are able to run on workstations and large-scale clusters alike.
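To give an impression of the kind of kernel such a pipeline ultimately has to produce, the following is a strongly simplified sketch of an OpenMP-parallel red-black Gauss-Seidel smoother for the 2D Poisson stencil on a structured grid. It is not actual ExaStencils output; real generated code additionally handles field layouts, ghost-layer exchange via MPI, and the loop transformations chosen by the generator.

```cpp
// Simplified sketch of a structured-grid smoother kernel of the kind a code
// generator might emit; not actual ExaStencils output.
#include <vector>

// One red-black Gauss-Seidel sweep for the 5-point Poisson stencil on an
// (n+2) x (n+2) grid including a boundary layer; parallelized over rows.
void smoothRBGS(std::vector<double>& u, const std::vector<double>& f,
                int n, double h) {
    const int stride = n + 2;
    for (int color = 0; color < 2; ++color) {
        #pragma omp parallel for
        for (int j = 1; j <= n; ++j) {
            for (int i = 1 + (j + color) % 2; i <= n; i += 2) {
                const int idx = j * stride + i;
                u[idx] = 0.25 * (u[idx - 1] + u[idx + 1]
                               + u[idx - stride] + u[idx + stride]
                               + h * h * f[idx]);
            }
        }
    }
}

int main() {
    const int n = 64;
    const double h = 1.0 / (n + 1);
    std::vector<double> u((n + 2) * (n + 2), 0.0);
    std::vector<double> f((n + 2) * (n + 2), 1.0);
    for (int sweep = 0; sweep < 100; ++sweep)
        smoothRBGS(u, f, n, h);
}
```

The two-color update order is what makes the sweep safe to parallelize: within one color, no updated point reads a neighbor of the same color.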
We showcase the applicability of our approach by implementing simple test problems, like Poisson's equation, as well as relevant applications from the field of computational fluid dynamics (CFD). In particular, we implement scalable solvers for the Stokes, Navier-Stokes and shallow water equations (SWE), discretized using finite differences (FD) and finite volumes (FV). For the case of Navier-Stokes, we also extend our implementation towards non-uniform grids, thereby enabling static mesh refinement, and towards advanced effects such as the simulated fluid being non-Newtonian and non-isothermal.
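For reference, the governing equations named above are shown below in one common form; the notation, non-dimensionalization, and the particular variants treated in the thesis (e.g., the non-Newtonian and non-isothermal extensions) may differ.

```latex
% Common textbook forms; notation may differ from the thesis.
\begin{align*}
  % Stokes (steady, incompressible)
  -\nu \, \Delta \mathbf{u} + \nabla p &= \mathbf{f}, &
  \nabla \cdot \mathbf{u} &= 0, \\
  % incompressible Navier--Stokes (kinematic pressure, constant density)
  \frac{\partial \mathbf{u}}{\partial t}
    + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
    - \nu \, \Delta \mathbf{u} + \nabla p &= \mathbf{f}, &
  \nabla \cdot \mathbf{u} &= 0, \\
  % shallow water equations (1D, conservative form, flat bottom)
  \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} &= 0, &
  \frac{\partial (hu)}{\partial t}
    + \frac{\partial}{\partial x}\!\left(hu^2 + \tfrac{1}{2} g h^2\right) &= 0.
\end{align*}
```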