
Refine

Author

  • Wende, Florian (29)
  • Steinke, Thomas (19)
  • Noack, Matthias (7)
  • Cordes, Frank (4)
  • Klemm, Michael (4)
  • Marsman, Martijn (4)
  • Kim, Jeongnim (3)
  • Reinefeld, Alexander (3)
  • Weiser, Martin (3)
  • Zhao, Zhengji (3)

Year of publication

  • 2021 (1)
  • 2020 (1)
  • 2019 (2)
  • 2018 (1)
  • 2017 (4)
  • 2016 (4)
  • 2015 (5)
  • 2014 (6)
  • 2013 (3)
  • 2012 (1)

Document Type

  • In Proceedings (13)
  • ZIB-Report (7)
  • Article (4)
  • Book chapter (2)
  • Bachelor's Thesis (1)
  • In Collection (1)
  • Master's Thesis (1)

Language

  • English (29)

Has Fulltext

  • no (21)
  • yes (8)

Is part of the Bibliography

  • no (29)

Keywords

  • GPGPU (2)
  • Concurrent Kernel Execution (1)
  • Connected Component Labeling (1)
  • Fault-tolerance (1)
  • Hyper-Q (1)
  • Ising Model (1)
  • Oversubscription (1)
  • Performance and usage measurement (1)
  • Process placement (1)
  • Swendsen-Wang Multi-Cluster Algorithm (1)

Institute

  • Distributed Algorithms and Supercomputing (28)
  • Modeling and Simulation of Complex Processes (3)
  • Numerical Mathematics (3)
  • Mathematics for Life and Materials Science (1)
  • Parallel and Distributed Computing (1)

29 search hits (1 to 10 shown)

Swendsen-Wang Multi-Cluster Algorithm for the 2D/3D Ising Model on Xeon Phi and GPU (2013)
Wende, Florian ; Steinke, Thomas
Simulations of the critical Ising model by means of local update algorithms suffer from critical slowing down. One way to partially compensate for the influence of this phenomenon on the runtime of simulations is to use increasingly faster and parallel computer hardware. Another approach is to use algorithms that do not suffer from critical slowing down, such as cluster algorithms. This paper reports on the Swendsen-Wang multi-cluster algorithm on the Intel Xeon Phi coprocessor 5110P, the Nvidia Tesla M2090 GPU, and x86 multi-core CPUs. We present shared-memory versions of this algorithm for the simulation of the two- and three-dimensional Ising model. We use a combination of local cluster search and global label reduction by means of atomic hardware primitives. Further, we describe an MPI version of the algorithm on Xeon Phi and CPU, respectively. Significant performance improvements over known implementations of the Swendsen-Wang algorithm are demonstrated.
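For reference, the core of the method fits in a few lines. The following is a minimal serial C++ sketch of one Swendsen-Wang sweep for the 2D Ising model (assuming J = 1 and periodic boundaries); the parallel versions described in the paper replace the union-find below with local cluster search and atomic label reduction.

    // Minimal serial Swendsen-Wang sweep for the 2D Ising model.
    // Illustrative only: the paper's shared-memory versions use local
    // cluster search plus atomic label reduction instead of union-find.
    #include <cmath>
    #include <cstdlib>
    #include <numeric>
    #include <vector>

    struct UnionFind {
        std::vector<int> parent;
        explicit UnionFind(int n) : parent(n) {
            std::iota(parent.begin(), parent.end(), 0);
        }
        int find(int x) {
            while (parent[x] != x) x = parent[x] = parent[parent[x]];
            return x;
        }
        void unite(int a, int b) { parent[find(a)] = find(b); }
    };

    void swendsen_wang_sweep(std::vector<int>& spin, int L, double beta) {
        const double p_bond = 1.0 - std::exp(-2.0 * beta);  // bond probability, J = 1
        UnionFind uf(L * L);
        for (int y = 0; y < L; ++y)
            for (int x = 0; x < L; ++x) {
                int i = y * L + x;
                int right = y * L + (x + 1) % L;
                int down = ((y + 1) % L) * L + x;
                // Activate bonds between aligned neighbors with probability p_bond.
                if (spin[i] == spin[right] && drand48() < p_bond) uf.unite(i, right);
                if (spin[i] == spin[down] && drand48() < p_bond) uf.unite(i, down);
            }
        // Flip each cluster as a whole with probability 1/2.
        std::vector<int> flip(L * L, -1);
        for (int i = 0; i < L * L; ++i) {
            int root = uf.find(i);
            if (flip[root] < 0) flip[root] = (drand48() < 0.5);
            if (flip[root]) spin[i] = -spin[i];
        }
    }
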
Simulation of Spin Models on Nvidia Graphics Cards using CUDA (2010)
Wende, Florian
This thesis reports on simulating spin models on Nvidia graphics cards using the CUDA programming model, an approach for making GPGPU (General Purpose Computation on Graphics Processing Units) available to a wide range of software developers not necessarily acquainted with (massively) parallel programming. By comparing program execution times for simulations of the Ising model and the Ising spin glass by means of the Metropolis algorithm on Nvidia Tesla C1060 graphics cards and an Intel Core i7-920 quad-core x86 CPU (we used OpenMP to make our simulations run on all 4 execution units of the CPU), we found that the Tesla C1060 performed about a factor of 5-10 faster than the Core i7-920, depending on the particular model and the accuracy of the calculations (32-bit or 64-bit). We also investigated the reliability of GPGPU computations, especially with respect to the occurrence of soft errors as suggested in [23]. We noticed faulty program outputs during long-time simulations of the Ising model on "large" lattices, and were able to link these problems to overheating of the corresponding graphics cards. Doing Monte Carlo simulations on parallel computer architectures, as was the case in this thesis, calls for generating random numbers in a parallel manner as well. We present implementations of the random number generators Ranlux and Mersenne Twister, and give an alternative, very efficient approach for producing parallel random numbers on Nvidia graphics cards. We successfully tested all random number generators used in this thesis for their quality by comparing Monte Carlo estimates against exact calculations.
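A hedged sketch of the data-parallel core: the standard checkerboard decomposition updates all sites of one parity concurrently, since they share no neighbors; the same decomposition maps onto CUDA threads on the GPU and OpenMP threads on the CPU (the thesis's exact scheme may differ). rand_r below is a weak placeholder for the parallel Ranlux/Mersenne Twister generators the thesis implements.

    // One checkerboard half-sweep of the Metropolis algorithm for the 2D
    // Ising model (J = 1, periodic boundaries). Sites of equal parity share
    // no neighbors, so they can be updated concurrently. rand_r is only a
    // stand-in: production runs need quality parallel RNGs.
    #include <cmath>
    #include <cstdlib>
    #include <omp.h>
    #include <vector>

    void metropolis_half_sweep(std::vector<int>& spin, int L, double beta, int parity) {
        #pragma omp parallel
        {
            unsigned seed = 12345u + 977u * (unsigned)omp_get_thread_num(); // per-thread RNG state
            #pragma omp for
            for (int y = 0; y < L; ++y)
                for (int x = (y + parity) % 2; x < L; x += 2) {
                    int i = y * L + x;
                    int nn = spin[y * L + (x + 1) % L] + spin[y * L + (x + L - 1) % L]
                           + spin[((y + 1) % L) * L + x] + spin[((y + L - 1) % L) * L + x];
                    double dE = 2.0 * spin[i] * nn;   // energy change of flipping spin i
                    if (dE <= 0.0 || rand_r(&seed) < std::exp(-beta * dE) * RAND_MAX)
                        spin[i] = -spin[i];           // accept the flip
                }
        }
    }
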
Dynamic Load Balancing on Massively Parallel Computer Architectures (2013)
Wende, Florian
This thesis reports on using dynamic load balancing methods on massively parallel computers in the context of multithreaded computations. In particular, we investigate the applicability of a randomized work stealing algorithm to ray tracing and breadth-first search as representatives of real-world applications with dynamic work creation. We use two current massively parallel hardware accelerators, the Nvidia Tesla M2090 and the Intel Xeon Phi, and demonstrate for both the suitability of the work-stealing scheme for these real-world applications. We also illustrate the necessity of dynamic load balancing for irregular computations on such hardware.
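The randomized work-stealing scheme at the heart of the thesis can be sketched as follows (a simplified, lock-based C++ illustration with a placeholder task type; the accelerator implementations rely on atomic, lock-free deques instead): each worker pops from its own deque and, when it runs empty, steals from the opposite end of a randomly chosen victim.

    // Skeleton of randomized work stealing: each worker owns a deque,
    // pushes/pops at its own end, and an idle worker steals from the
    // opposite end of a random victim's deque.
    #include <deque>
    #include <mutex>
    #include <optional>
    #include <random>
    #include <vector>

    using Task = int;  // placeholder work item

    struct WorkerQueue {
        std::deque<Task> tasks;
        std::mutex m;

        void push(Task t) { std::lock_guard<std::mutex> g(m); tasks.push_back(t); }
        std::optional<Task> pop() {                       // owner end
            std::lock_guard<std::mutex> g(m);
            if (tasks.empty()) return std::nullopt;
            Task t = tasks.back(); tasks.pop_back(); return t;
        }
        std::optional<Task> steal() {                     // thief end
            std::lock_guard<std::mutex> g(m);
            if (tasks.empty()) return std::nullopt;
            Task t = tasks.front(); tasks.pop_front(); return t;
        }
    };

    std::optional<Task> get_work(std::vector<WorkerQueue>& q, int self, std::mt19937& rng) {
        if (auto t = q[self].pop()) return t;             // local work first
        std::uniform_int_distribution<int> pick(0, (int)q.size() - 1);
        for (int attempt = 0; attempt < 8; ++attempt) {   // bounded random steal attempts
            int victim = pick(rng);
            if (victim != self)
                if (auto t = q[victim].steal()) return t;
        }
        return std::nullopt;                              // caller may back off / terminate
    }
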
On Improving the Performance of Multi-threaded CUDA Applications with Concurrent Kernel Execution by Kernel Reordering (2012)
Wende, Florian ; Cordes, Frank ; Steinke, Thomas
The Impact of Process Placement and Oversubscription on Application Performance: A Case Study for Exascale Computing (2015)
Wende, Florian ; Steinke, Thomas ; Reinefeld, Alexander
With the growing number of hardware components and the increasing software complexity of upcoming exascale computers, system failures will become the norm rather than the exception for long-running applications. Fault tolerance can be achieved by creating checkpoints during the execution of a parallel program. Checkpoint/Restart (C/R) mechanisms allow for both task migration (even in the absence of hardware faults) and restarting of tasks after hardware faults occur. Affected tasks are then migrated to other nodes, which may result in unfortunate process placement and/or oversubscription of compute resources. In this paper we analyze the impact of unfortunate process placement and oversubscription of compute resources on the performance and scalability of two typical HPC application workloads, CP2K and MOM5. Results are given for a Cray XC30/40 with the Aries dragonfly topology. Our results indicate that unfortunate process placement has only a minor negative impact, while oversubscription substantially degrades performance. The latter may be (partially) beneficial only when multiple applications with different computational characteristics are placed on the same node.
Multi-threaded Kernel Offloading to GPGPU Using Hyper-Q on Kepler Architecture (2014)
Wende, Florian ; Steinke, Thomas ; Cordes, Frank
Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced the concurrent kernel execution feature, allowing up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of the respective resources. Insufficient scheduling capabilities in this respect, however, can significantly reduce the theoretical concurrency level. With the Kepler GPU architecture, Nvidia addresses this issue by introducing the Hyper-Q feature with 32 hardware-managed work queues for concurrent kernel execution. We investigate the Hyper-Q feature within heterogeneous workloads with multiple concurrent host threads or processes, each offloading computations to the GPU. By means of a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
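A hedged host-side sketch of the offloading pattern under study, using only CUDA runtime API calls: each host thread drives its own CUDA stream, so the 32 Hyper-Q hardware queues on Kepler can overlap the per-thread work. cudaMemsetAsync stands in for a real kernel launch (kernel<<<grid, block, 0, stream>>>(...)), which would require compilation with nvcc.

    // Multi-threaded kernel offloading pattern: one CUDA stream per host
    // thread, so independent work items can execute concurrently on the GPU.
    // Error checking omitted for brevity.
    #include <cuda_runtime.h>
    #include <thread>
    #include <vector>

    void worker(int id) {
        cudaStream_t stream;
        cudaStreamCreate(&stream);                       // one stream per host thread
        void* buf = nullptr;
        cudaMalloc(&buf, 1 << 20);
        cudaMemsetAsync(buf, id % 256, 1 << 20, stream); // placeholder for the offloaded kernel
        cudaStreamSynchronize(stream);                   // wait only for this thread's work
        cudaFree(buf);
        cudaStreamDestroy(stream);
    }

    int main() {
        std::vector<std::thread> threads;
        for (int i = 0; i < 32; ++i) threads.emplace_back(worker, i); // up to 32 Hyper-Q queues
        for (auto& t : threads) t.join();
        return 0;
    }
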
Portable SIMD Performance with OpenMP* 4.x Compiler Directives (2016)
Wende, Florian ; Noack, Matthias ; Steinke, Thomas ; Klemm, Michael ; Zitzlsberger, Georg ; Newburn, Chris J.
Effective vectorization is becoming increasingly important for high performance and energy efficiency on processors with wide SIMD units. Compilers often require programmers to identify opportunities for vectorization, using directives to disprove data dependences. The OpenMP 4.x SIMD directives strive to provide portability. We investigate the ability of current compilers (GNU, Clang, and Intel) to generate SIMD code for microbenchmarks that cover common patterns in scientific codes and for two kernels from the VASP and the MOM5/ERGOM application. We explore coding strategies for improving SIMD performance across different compilers and platforms (Intel® Xeon® processor and Intel® Xeon Phi™ (co)processor). We compare OpenMP* 4.x SIMD vectorization with and without vector data types against SIMD intrinsics and C++ SIMD types. Our experiments show that in many cases portable performance can be achieved. All microbenchmarks are available as open source as a reference for programmers and compiler experts to enhance SIMD code generation.
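Two of the directive patterns the paper's microbenchmarks cover can be illustrated with hypothetical kernels (the function names below are not from the paper): a simd loop with a reduction, and a function vectorized across call sites via declare simd. How well each compiler maps these hints to SIMD instructions is exactly what the study measures.

    // OpenMP 4.x SIMD directives on illustrative kernels. The directives
    // assert the absence of data dependences; code generation quality
    // varies across GNU, Clang, and Intel compilers.
    #include <cstddef>

    #pragma omp declare simd notinbranch
    inline float axpy(float a, float x, float y) { return a * x + y; }

    float dot(const float* a, const float* b, std::size_t n) {
        float sum = 0.0f;
        #pragma omp simd reduction(+ : sum)   // vectorized reduction
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    void saxpy(float a, const float* x, float* y, std::size_t n) {
        #pragma omp simd                      // calls the SIMD variant of axpy
        for (std::size_t i = 0; i < n; ++i)
            y[i] = axpy(a, x[i], y[i]);
    }
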
KART – A Runtime Compilation Library for Improving HPC Application Performance (2016)
Noack, Matthias ; Wende, Florian ; Zitzlsberger, Georg ; Klemm, Michael ; Steinke, Thomas
The effectiveness of ahead-of-time compiler optimization heavily depends on the amount of information available at compile time. Input-specific information that is only available at runtime cannot be used, although it often determines loop counts, branching predicates and paths, as well as memory-access patterns. It can also be crucial for generating efficient SIMD-vectorized code. This is especially relevant for the many-core architectures paving the way to exascale computing, which are more sensitive to code optimization. We explore the design space for using input-specific information at compile time and present KART, a C++ library solution that allows developers to compile, link, and execute code (e.g., C, C++, Fortran) at application runtime. Besides mere runtime compilation of performance-critical code, KART can be used to instantiate the same code multiple times using different inputs, compilers, and options. Other techniques like auto-tuning and code generation can be integrated into a KART-enabled application instead of being scripted around it. We evaluate runtimes and compilation costs for different synthetic kernels, and show the effectiveness for two real-world applications, HEOM and a WSM6 proxy.
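The following C++ sketch shows the generic mechanism behind runtime compilation on POSIX systems; it is not KART's API, only an illustration of the technique: bake a runtime-known constant into generated source, invoke the system compiler, and load the resulting function with dlopen/dlsym (link with -ldl).

    // Generic runtime-compilation technique (NOT KART's API): a loop trip
    // count known only at runtime is emitted as a compile-time constant so
    // the compiler can fully unroll and vectorize.
    #include <cstdio>
    #include <cstdlib>
    #include <dlfcn.h>

    using KernelFn = void (*)(const double*, double*, int);

    KernelFn compile_kernel(int unroll /* runtime-known trip count */) {
        FILE* src = fopen("/tmp/kernel.c", "w");
        if (!src) return nullptr;
        fprintf(src,
            "void kernel(const double* x, double* y, int n) {\n"
            "    for (int i = 0; i < n; ++i)\n"
            "        for (int k = 0; k < %d; ++k)  /* constant trip count */\n"
            "            y[i] += x[i];\n"
            "}\n", unroll);
        fclose(src);
        if (system("cc -O3 -march=native -shared -fPIC -o /tmp/kernel.so /tmp/kernel.c"))
            return nullptr;                               // compilation failed
        void* lib = dlopen("/tmp/kernel.so", RTLD_NOW);
        return lib ? (KernelFn)dlsym(lib, "kernel") : nullptr;
    }
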
Dynamic SIMD Vector Lane Scheduling (2016)
Krzikalla, Olaf ; Wende, Florian ; Höhnerbach, Markus
A classical technique to vectorize code that contains control flow is control-flow to data-flow conversion. In that approach, statements are augmented with masks that denote whether a given vector lane participates in the statement's execution or idles. If the scheduling of work to vector lanes is performed statically, some of the vector lanes will run idle in case of control-flow divergences or varying work intensities across the loop iterations. With an increasing number of vector lanes, the likelihood of divergences or heavily unbalanced work assignments increases, and static scheduling leads to poor resource utilization. In this paper, we investigate different approaches to dynamic SIMD vector lane scheduling using the Mandelbrot set algorithm as a test case. To overcome the limitations of static scheduling, idle vector lanes are assigned work items dynamically, thereby minimizing per-lane idle cycles. Our evaluation on the Knights Corner and Knights Landing platforms shows that our approaches can lead to considerable performance gains over a static work assignment. By using the AVX-512 vector compress and expand instructions, we are able to further improve the scheduling.
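The scheduling idea can be illustrated with a scalar emulation of the vector lanes (an assumption-laden sketch, not the paper's AVX-512 code): each element of the lane arrays models one SIMD lane iterating the Mandelbrot recurrence z <- z^2 + c, and a lane whose point escapes is immediately refilled with the next unprocessed point instead of idling until the whole vector finishes, which is what AVX-512 compress/expand instructions do natively.

    // Scalar emulation of dynamic SIMD lane scheduling for the Mandelbrot
    // iteration: W lanes step in lockstep; a finished lane is refilled with
    // the next work item (statically scheduled lanes would idle instead).
    #include <complex>
    #include <vector>

    constexpr int W = 8;  // emulated vector width

    void mandelbrot_dynamic(const std::vector<std::complex<float>>& points,
                            std::vector<int>& iters, int max_iter) {
        iters.assign(points.size(), 0);
        std::complex<float> c[W], z[W];
        int idx[W], next = 0;
        for (int l = 0; l < W; ++l) {                    // initial fill of all lanes
            idx[l] = next < (int)points.size() ? next++ : -1;
            if (idx[l] >= 0) { c[l] = points[idx[l]]; z[l] = 0; }
        }
        for (bool active = true; active; ) {
            active = false;
            for (int l = 0; l < W; ++l) {                // one masked vector step
                if (idx[l] < 0) continue;                // lane has no work
                z[l] = z[l] * z[l] + c[l];
                if (std::norm(z[l]) > 4.0f || ++iters[idx[l]] >= max_iter) {
                    // Lane done: refill with the next work item (dynamic scheduling).
                    idx[l] = next < (int)points.size() ? next++ : -1;
                    if (idx[l] >= 0) { c[l] = points[idx[l]]; z[l] = 0; }
                }
                active |= (idx[l] >= 0);
            }
        }
    }
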
Integration of Intel Xeon Phi Servers into the HLRN-III Complex: Experiences, Performance and Lessons Learned (2014)
Wende, Florian ; Laubender, Guido ; Steinke, Thomas
The third generation of the North German Supercomputing Alliance (HLRN) compute and storage facilities comprises a Cray XC30 architecture with exclusively Intel Ivy Bridge compute nodes. In the second phase, scheduled for November 2014, the HLRN-III configuration will undergo a substantial upgrade, together with the option of integrating accelerator nodes into the system. To support the decision-making process, a four-node Intel Xeon Phi cluster was integrated into the present HLRN-III infrastructure at ZIB. This integration covers user/project management, file system access, and job management via the HLRN-III batch system. For selected workloads, in-depth analysis, migration, and optimization work on Xeon Phi is in progress. We report our experiences and lessons learned from the Xeon Phi installation and integration process, and discuss initial results of the application evaluation on the Xeon Phi cluster platform for selected examples.
