TY - JOUR
A1 - Ziegler, Alexander
A1 - Ogurreck, Malte
A1 - Steinke, Thomas
A1 - Beckmann, Felix
A1 - Prohaska, Steffen
A1 - Ziegler, Andreas
T1 - Opportunities and challenges for digital morphology
JF - Biology Direct
Y1 - 2010
U6 - https://doi.org/10.1186/1745-6150-5-45
VL - 5
IS - 1
SP - 45
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Reinefeld, Alexander
ED - Gray, A.
ED - Smith, L.
ED - Weiland, M.
T1 - The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing
T2 - Proceedings of the 3rd International Conference on Exascale Applications and Software, EASC 2015
Y1 - 2015
SN - 978-0-9926615-1-9
SP - 13
EP - 18
PB - The University of Edinburgh
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Reinefeld, Alexander
T1 - The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing
N2 - With the growing number of hardware components and the increasing software complexity in upcoming exascale computers, system failures will become the norm rather than the exception for long-running applications. Fault tolerance can be achieved by creating checkpoints during the execution of a parallel program. Checkpoint/Restart (C/R) mechanisms allow for both task migration (even in the absence of hardware faults) and the restarting of tasks after hardware faults occur. Affected tasks are then migrated to other nodes, which may result in unfortunate process placement and/or oversubscription of compute resources. In this paper we analyze the impact of unfortunate process placement and oversubscription of compute resources on the performance and scalability of two typical HPC application workloads, CP2K and MOM5. Results are given for a Cray XC30/40 with Aries dragonfly topology. Our results indicate that unfortunate process placement has only a minor negative impact, while oversubscription substantially degrades performance. The latter may only be (partially) beneficial when multiple applications with different computational characteristics are placed on the same node.
T3 - ZIB-Report - 15-05
KW - Fault-tolerance
KW - Process placement
KW - Oversubscription
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-53560
SN - 1438-0064
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Klemm, Michael
A1 - Reinefeld, Alexander
ED - Reinders, James
ED - Jeffers, Jim
T1 - Concurrent Kernel Offloading
T2 - High Performance Parallelism Pearls
Y1 - 2014
UR - http://store.elsevier.com/High-Performance-Parallelism-Pearls/James-Reinders/isbn-9780128021187/
SN - 978-0128021187
N1 - Target publication date: Nov, 2014
PB - Morgan Kaufmann, Elsevier
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Cordes, Frank
T1 - Multi-threaded Kernel Offloading to GPGPU Using Hyper-Q on Kepler Architecture
N2 - Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced the concurrent kernel execution feature, allowing up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of the respective resources. Insufficient scheduling capabilities in this respect, however, can significantly reduce the theoretical concurrency level. With the Kepler GPU architecture, Nvidia addresses this issue by introducing the Hyper-Q feature with 32 hardware-managed work queues for concurrent kernel execution.
We investigate the Hyper-Q feature within heterogeneous workloads with multiple concurrent host threads or processes, each offloading computations to the GPU. By means of a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
T3 - ZIB-Report - 14-19
KW - GPGPU
KW - Hyper-Q
KW - Concurrent Kernel Execution
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-50362
SN - 1438-0064
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Steinke, Thomas
T1 - Swendsen-Wang Multi-Cluster Algorithm for the 2D/3D Ising Model on Xeon Phi and GPU
T2 - SC '13: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Article No. 83, ACM, New York, NY, USA
Y1 - 2013
U6 - https://doi.org/10.1145/2503210.2503254
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
T1 - Swendsen-Wang Multi-Cluster Algorithm for the 2D/3D Ising Model on Xeon Phi and GPU
N2 - Simulations of the critical Ising model by means of local update algorithms suffer from critical slowing down. One way to partially compensate for the influence of this phenomenon on the runtime of simulations is to use increasingly fast and parallel computer hardware. Another approach is to use algorithms that do not suffer from critical slowing down, such as cluster algorithms. This paper reports on the Swendsen-Wang multi-cluster algorithm on the Intel Xeon Phi coprocessor 5110P, the Nvidia Tesla M2090 GPU, and x86 multi-core CPUs. We present shared-memory versions of this algorithm for the simulation of the two- and three-dimensional Ising model. We use a combination of local cluster search and global label reduction by means of atomic hardware primitives. Further, we describe an MPI version of the algorithm on Xeon Phi and CPU, respectively. Significant performance improvements over known implementations of the Swendsen-Wang algorithm are demonstrated.
T3 - ZIB-Report - 13-44
KW - Swendsen-Wang Multi-Cluster Algorithm
KW - Ising Model
KW - Xeon Phi
KW - GPGPU
KW - Connected Component Labeling
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42187
SN - 1438-0064
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Noack, Matthias
A1 - Steinke, Thomas
A1 - Klemm, Michael
A1 - Zitzlsberger, Georg
A1 - Newburn, Chris J.
ED - Dutot, Pierre-Francois
ED - Trystram, Denis
T1 - Portable SIMD Performance with OpenMP* 4.x Compiler Directives
N2 - Effective vectorization is becoming increasingly important for high performance and energy efficiency on processors with wide SIMD units. Compilers often require programmers to identify opportunities for vectorization, using directives to disprove data dependences. The OpenMP 4.x SIMD directives strive to provide portability. We investigate the ability of current compilers (GNU, Clang, and Intel) to generate SIMD code for microbenchmarks that cover common patterns in scientific codes and for two kernels from the VASP and MOM5/ERGOM applications. We explore coding strategies for improving SIMD performance across different compilers and platforms (Intel® Xeon® processor and Intel® Xeon Phi™ (co)processor). We compare OpenMP* 4.x SIMD vectorization with and without vector data types against SIMD intrinsics and C++ SIMD types. Our experiments show that in many cases portable performance can be achieved.
All microbenchmarks are available as open source as a reference for programmers and compiler experts to enhance SIMD code generation.
Y1 - 2016
SN - 978-3-319-43659-3
U6 - https://doi.org/10.1007/978-3-319-43659-3_20
VL - Euro-Par 2016: Parallel Processing: 22nd International Conference on Parallel and Distributed Computing
PB - Springer International Publishing
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Noack, Matthias
A1 - Schütt, Thorsten
A1 - Sachs, Stephen
A1 - Steinke, Thomas
T1 - Application Performance on a Cray XC30 Evaluation System with Xeon Phi Coprocessors at HLRN-III
T2 - Cray User Group
Y1 - 2015
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Marsman, Martijn
A1 - Steinke, Thomas
T1 - On Enhancing 3D-FFT Performance in VASP
T2 - CUG Proceedings
Y1 - 2016
ER -

TY - JOUR
A1 - Wende, Florian
A1 - Marsman, Martijn
A1 - Kim, Jeongnim
A1 - Vasilev, Fedor
A1 - Zhao, Zhengji
A1 - Steinke, Thomas
T1 - OpenMP in VASP: Threading and SIMD
JF - International Journal of Quantum Chemistry
N2 - The Vienna Ab initio Simulation Package (VASP) is a widely used electronic structure code that originally exploits process-level parallelism through the Message Passing Interface (MPI) for work distribution within and across nodes. Architectural changes in modern parallel processors urge programmers to address thread- and data-level parallelism as well in order to benefit most from the available compute resources within a node. We describe for VASP how to approach an MPI + OpenMP parallelization, including data-level parallelism through OpenMP SIMD constructs together with a generic high-level vector coding scheme. We demonstrate improved scalability of VASP with more than 20% gain over the MPI-only version, as well as a 2x performance increase for collective operations using the multiple-endpoint MPI feature. The high-level vector coding scheme applied to VASP's general gradient approximation routine gives up to 9x performance gain on AVX512 platforms with the Intel compiler.
Y1 - 2018
U6 - https://doi.org/10.1002/qua.25851
IS - Emerging Architectures in Computational Chemistry
SP - e25851
PB - Wiley Online Library
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Laubender, Guido
A1 - Steinke, Thomas
T1 - Integration of Intel Xeon Phi Servers into the HLRN-III Complex: Experiences, Performance and Lessons Learned
T2 - CUG2014 Proceedings
Y1 - 2014
ER -

TY - GEN
A1 - Wende, Florian
A1 - Laubender, Guido
A1 - Steinke, Thomas
T1 - Integration of Intel Xeon Phi Servers into the HLRN-III Complex: Experiences, Performance and Lessons Learned
N2 - The third generation of the North German Supercomputing Alliance (HLRN) compute and storage facilities comprises a Cray XC30 architecture with exclusively Intel Ivy Bridge compute nodes. In the second phase, scheduled for November 2014, the HLRN-III configuration will undergo a substantial upgrade, together with the option of integrating accelerator nodes into the system. To support the decision-making process, a four-node Intel Xeon Phi cluster has been integrated into the present HLRN-III infrastructure at ZIB. This integration includes user/project management, file system access, and job management via the HLRN-III batch system. For selected workloads, in-depth analysis, migration, and optimization work on Xeon Phi is in progress. We report on our experiences and lessons learned during the Xeon Phi installation and integration process. For selected examples, initial results of the application evaluation on the Xeon Phi cluster platform are discussed.
T3 - ZIB-Report - 14-15
KW - Performance and usage measurement
KW - System management
KW - System integration
KW - Xeon Phi cluster
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-49990
UR - https://cug.org/proceedings/cug2014_proceedings/includes/files/pap194-file2.pdf
SN - 1438-0064
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Cordes, Frank
A1 - Steinke, Thomas
T1 - Concurrent Kernel Execution on Xeon Phi within Parallel Heterogeneous Workloads
T2 - Euro-Par 2014: Parallel Processing. 20th International Conference, Porto, Portugal, August 25-29, 2014, Proceedings
Y1 - 2014
UR - http://www.springer.com/computer/swe/book/978-3-319-09872-2
U6 - https://doi.org/10.1007/978-3-319-09873-9_66
VL - 8632
SP - 788
EP - 799
ER -

TY - CHAP
A1 - Wende, Florian
A1 - Cordes, Frank
A1 - Steinke, Thomas
T1 - On Improving the Performance of Multi-threaded CUDA Applications with Concurrent Kernel Execution by Kernel Reordering
T2 - Application Accelerators in High Performance Computing (SAAHPC), 2012 Symposium on
Y1 - 2012
U6 - https://doi.org/10.1109/SAAHPC.2012.12
SP - 74
EP - 83
ER -

TY - CHAP
A1 - Weinhold, Carsten
A1 - Lackorzynski, Adam
A1 - Bierbaum, Jan
A1 - Küttler, Martin
A1 - Planeta, Maksym
A1 - Härtig, Hermann
A1 - Shiloh, Amnon
A1 - Levy, Ely
A1 - Ben-Nun, Tal
A1 - Barak, Amnon
A1 - Steinke, Thomas
A1 - Schütt, Thorsten
A1 - Fajerski, Jan
A1 - Reinefeld, Alexander
A1 - Lieber, Matthias
A1 - Nagel, Wolfgang
T1 - FFMK: A Fast and Fault-tolerant Microkernel-based System for Exascale Computing
T2 - SPPEXA Symposium 2016
Y1 - 2016
U6 - https://doi.org/10.1007/978-3-319-40528-5_18
ER -

TY - JOUR
A1 - Trißl, Silke
A1 - Rother, Kristian
A1 - Müller, Heiko
A1 - Steinke, Thomas
A1 - Koch, Ina
A1 - Preissner, Robert
A1 - Frömmel, Cornelius
A1 - Leser, Ulf
T1 - Columba: an integrated database of proteins, structures, and annotations
JF - BMC Bioinformatics
Y1 - 2005
U6 - https://doi.org/10.1186/1471-2105-6-81
VL - 6
SP - 81
EP - 92
ER -

TY - CHAP
A1 - Steinke, Thomas
A1 - Reinefeld, Alexander
T1 - Experiences with High-Level Programming of FPGAs on Cray XD1
T2 - CUG Proceedings
Y1 - 2006
ER -

TY - CHAP
A1 - Steinke, Thomas
A1 - Peter, Kathrin
A1 - Borchert, Sebastian
T1 - Efficiency Considerations of Cauchy Reed-Solomon Implementations on Accelerator and Multi-Core Platforms
T2 - Symposium on Application Accelerators in High Performance Computing (SAAHPC)
Y1 - 2010
UR - http://saahpc.ncsa.illinois.edu/10/papers/paper_12.pdf
CY - Knoxville, USA
ER -

TY - GEN
A1 - Stalling, Detlev
A1 - Steinke, Thomas
T1 - Visualization of Vector Fields in Quantum Chemistry
N2 - Many interesting phenomena in molecular systems, like interactions between macromolecules, protein-substrate docking, or channeling processes in membranes, are governed to a high degree by classical Coulomb or van der Waals forces. The visualization of these force fields is important for verifying numerical simulations. Moreover, by inspecting the forces visually we can gain deeper insight into molecular processes. Up to now, the visualization of vector fields has been quite unusual in computational chemistry. In fact, many commercial software packages do not support this topic at all. The reason is not that vector fields are considered unimportant, but mainly the lack of adequate visualization methods.
In this paper we survey a number of methods for vector field visualization, ranging from well-known concepts like arrow or streamline plots to more advanced techniques like line integral convolution, and show how these can be applied to computational chemistry. A combination of the most meaningful methods in an interactive 3D visualization environment can provide a powerful toolbox for analyzing simulations in molecular dynamics.
T3 - ZIB-Report - SC-96-01
Y1 - 1996
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-2124
ER -