TY - JOUR
A1 - Heinze, Rieke
A1 - Dipankar, Anurag
A1 - Henken, Cintia Carbajal
A1 - Moseley, Christopher
A1 - Sourdeval, Odran
A1 - Trömel, Silke
A1 - Xie, Xinxin
A1 - Adamidis, Panos
A1 - Ament, Felix
A1 - Baars, Holger
A1 - Barthlott, Christian
A1 - Behrendt, Andreas
A1 - Blahak, Ulrich
A1 - Bley, Sebastian
A1 - Brdar, Slavko
A1 - Brueck, Matthias
A1 - Crewell, Susanne
A1 - Deneke, Hartwig
A1 - Di Girolamo, Paolo
A1 - Evaristo, Raquel
A1 - Fischer, Jürgen
A1 - Frank, Christopher
A1 - Friederichs, Petra
A1 - Göcke, Tobias
A1 - Gorges, Ksenia
A1 - Hande, Luke
A1 - Hanke, Moritz
A1 - Hansen, Akio
A1 - Hege, Hans-Christian
A1 - Hose, Corinna
A1 - Jahns, Thomas
A1 - Kalthoff, Norbert
A1 - Klocke, Daniel
A1 - Kneifel, Stefan
A1 - Knippertz, Peter
A1 - Kuhn, Alexander
A1 - van Laar, Thriza
A1 - Macke, Andreas
A1 - Maurer, Vera
A1 - Mayer, Bernhard
A1 - Meyer, Catrin I.
A1 - Muppa, Shravan K.
A1 - Neggers, Roeland A. J.
A1 - Orlandi, Emiliano
A1 - Pantillon, Florian
A1 - Pospichal, Bernhard
A1 - Röber, Niklas
A1 - Scheck, Leonhard
A1 - Seifert, Axel
A1 - Seifert, Patric
A1 - Senf, Fabian
A1 - Siligam, Pavan
A1 - Simmer, Clemens
A1 - Steinke, Sandra
A1 - Stevens, Bjorn
A1 - Wapler, Kathrin
A1 - Weniger, Michael
A1 - Wulfmeyer, Volker
A1 - Zängl, Günther
A1 - Zhang, Dan
A1 - Quaas, Johannes
T1 - Large-eddy simulations over Germany using ICON: a comprehensive evaluation
JF - Quarterly Journal of the Royal Meteorological Society
N2 - Large-eddy simulations (LES) with the new ICOsahedral Non-hydrostatic atmosphere model (ICON) covering Germany are evaluated for four days in spring 2013 using observational data from various sources. Reference simulations with the established Consortium for Small-scale Modelling (COSMO) numerical weather prediction model and further standard LES codes are performed and used as a reference. This comprehensive evaluation approach covers multiple parameters and scales, focusing on boundary-layer variables, clouds and precipitation. The evaluation points to the need to work on parametrizations influencing the surface energy balance, and possibly on ice cloud microphysics. The central purpose for the development and application of ICON in the LES configuration is the use of simulation results to improve the understanding of moist processes, as well as their parametrization in climate models. The evaluation thus aims at building confidence in the model's ability to simulate small- to mesoscale variability in turbulence, clouds and precipitation. The results are encouraging: the high-resolution model matches the observed variability much better at small- to mesoscales than the coarser resolved reference model. In its highest grid resolution, the simulated turbulence profiles are realistic and column water vapour matches the observed temporal variability at short time-scales. Despite being somewhat too large and too frequent, small cumulus clouds are well represented in comparison with satellite data, as is the shape of the cloud size spectrum. Variability of cloud water matches the satellite observations much better in ICON than in the reference model. In this sense, it is concluded that the model is fit for the purpose of using its output for parametrization development, despite the potential to improve further some important aspects of processes that are also parametrized in the high-resolution model.
Y1 - 2017
U6 - https://doi.org/10.1002/qj.2947
VL - 143
IS - 702
SP - 69
EP - 100
ER -
TY - CHAP
A1 - Noack, Matthias
A1 - Wende, Florian
A1 - Zitzlsberger, Georg
A1 - Klemm, Michael
A1 - Steinke, Thomas
T1 - KART - A Runtime Compilation Library for Improving HPC Application Performance
T2 - High Performance Computing: ISC High Performance 2017 International Workshops, DRBSD, ExaComm, HCPM, HPC-IODC, IWOPH, IXPUG, P^3MA, VHPC, Visualization at Scale, WOPSSS, Frankfurt, Germany, June 18-22, 2017, Revised Selected Papers
N2 - The effectiveness of ahead-of-time compiler optimization heavily depends on the amount of information available at compile time. Input-specific information that is only available at runtime cannot be used, although it often determines loop counts, branching predicates and paths, as well as memory-access patterns. It can also be crucial for generating efficient SIMD-vectorized code. This is especially relevant for the many-core architectures paving the way to exascale computing, which are more sensitive to code optimization. We explore the design space for using input-specific information at compile time and present KART, a C++ library solution that allows developers to compile, link, and execute code (e.g., C, C++, Fortran) at application runtime. Besides mere runtime compilation of performance-critical code, KART can be used to instantiate the same code multiple times using different inputs, compilers, and options. Other techniques like auto-tuning and code generation can be integrated into a KART-enabled application instead of being scripted around it. We evaluate runtimes and compilation costs for different synthetic kernels, and show the effectiveness for two real-world applications, HEOM and a WSM6 proxy.
Y1 - 2017
U6 - https://doi.org/10.1007/978-3-319-67630-2_29
N1 - Best Paper Award
VL - 10524
SP - 389
EP - 403
PB - Springer International Publishing
ER -
TY - JOUR
A1 - Knoop, Helge
A1 - Gronemeier, Tobias
A1 - Sühring, Matthias
A1 - Steinbach, Peter
A1 - Noack, Matthias
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Knigge, Christoph
A1 - Raasch, Siegfried
A1 - Ketelsen, Klaus
T1 - Porting the MPI-parallelized LES model PALM to multi-GPU systems and many integrated core processors: an experience report
JF - International Journal of Computational Science and Engineering. Special Issue on: Novel Strategies for Programming Accelerators
N2 - The computational power and availability of graphics processing units (GPUs), such as the Nvidia Tesla, and Many Integrated Core (MIC) processors, such as the Intel Xeon Phi, on high performance computing (HPC) systems are rapidly evolving. However, HPC applications need to be ported to take advantage of such hardware. This paper is a report on our experience of porting the MPI+OpenMP-parallelised large-eddy simulation model (PALM) to multi-GPU as well as to MIC processor environments, using the directive-based high-level programming paradigms OpenACC and OpenMP, respectively. PALM is a Fortran-based computational fluid dynamics software package, used for the simulation of atmospheric and oceanic boundary layers to answer questions linked to fundamental atmospheric turbulence research, urban modelling, aircraft safety and cloud physics. Development of PALM started in 1997; the project currently comprises 140 kLOC and is used on HPC systems with up to 43,200 cores.
The main challenges we faced during the porting process were the size and complexity of the PALM code base, its inconsistent modularisation and the complete lack of a unit-test suite. We report the methods used to identify performance issues as well as our experiences with state-of-the-art profiling tools. Moreover, we outline the porting steps required to execute our code properly on GPUs and MIC processors, describe the problems and bottlenecks encountered during the porting process, and present separate performance tests for both architectures. These performance tests, however, do not provide benchmark information comparing the performance of the ported code between the two architectures.
Y1 - 2017
PB - Inderscience
ET - Special Issue on: Novel Strategies for Programming Accelerators
ER -