TY - GEN
A1 - Noack, Matthias
A1 - Wende, Florian
A1 - Zitzlsberger, Georg
A1 - Klemm, Michael
A1 - Steinke, Thomas
T1 - KART – A Runtime Compilation Library for Improving HPC Application Performance
N2 - The effectiveness of ahead-of-time compiler optimization heavily depends on the amount of information available at compile time. Input-specific information that is only available at runtime cannot be used, although it often determines loop counts, branching predicates and paths, as well as memory-access patterns. It can also be crucial for generating efficient SIMD-vectorized code. This is especially relevant for the many-core architectures paving the way to exascale computing, which are more sensitive to code optimization. We explore the design space for using input-specific information at compile time and present KART, a C++ library solution that allows developers to compile, link, and execute code (e.g., C, C++, Fortran) at application runtime. Besides mere runtime compilation of performance-critical code, KART can be used to instantiate the same code multiple times using different inputs, compilers, and options. Other techniques like auto-tuning and code generation can be integrated into a KART-enabled application instead of being scripted around it. We evaluate runtimes and compilation costs for different synthetic kernels, and show the effectiveness for two real-world applications, HEOM and a WSM6 proxy.
T3 - ZIB-Report - 16-48
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-60730
SN - 1438-0064
ER -
TY - CHAP
A1 - Wende, Florian
A1 - Noack, Matthias
A1 - Steinke, Thomas
A1 - Klemm, Michael
A1 - Zitzlsberger, Georg
A1 - Newburn, Chris J.
ED - Dutot, Pierre-Francois
ED - Trystram, Denis
T1 - Portable SIMD Performance with OpenMP* 4.x Compiler Directives
N2 - Effective vectorization is becoming increasingly important for high performance and energy efficiency on processors with wide SIMD units. Compilers often require programmers to identify opportunities for vectorization, using directives to disprove data dependences. The OpenMP 4.x SIMD directives strive to provide portability. We investigate the ability of current compilers (GNU, Clang, and Intel) to generate SIMD code for microbenchmarks that cover common patterns in scientific codes and for two kernels from the VASP and the MOM5/ERGOM applications. We explore coding strategies for improving SIMD performance across different compilers and platforms (Intel® Xeon® processor and Intel® Xeon Phi™ (co)processor). We compare OpenMP* 4.x SIMD vectorization with and without vector data types against SIMD intrinsics and C++ SIMD types. Our experiments show that in many cases portable performance can be achieved. All microbenchmarks are available as open source as a reference for programmers and compiler experts to enhance SIMD code generation.
Y1 - 2016
SN - 978-3-319-43659-3
U6 - https://doi.org/10.1007/978-3-319-43659-3_20
VL - Euro-Par 2016: Parallel Processing: 22nd International Conference on Parallel and Distributed Computing
PB - Springer International Publishing
ER -