TY - THES
A1 - Wende, Florian
T1 - Dynamic Load Balancing on Massively Parallel Computer Architectures
N2 - This thesis reports on using dynamic load balancing methods on massively parallel computers in the context of multithreaded computations. In particular, we investigate the applicability of a randomized work stealing algorithm to ray tracing and breadth-first search as representatives of real-world applications with dynamic work creation. For our considerations we made use of current massively parallel hardware accelerators: the Nvidia Tesla M2090 and the Intel Xeon Phi. For both accelerators we demonstrate the suitability of the work stealing scheme for the said real-world applications. We also illustrate the necessity of dynamic load balancing for irregular computations on such hardware.
N2 - This Bachelor thesis deals with methods of dynamic load balancing on massively parallel computers in the context of multithreaded program execution. In particular, the suitability of a randomized work stealing algorithm for executing real-world applications with dynamic work creation, such as ray tracing and breadth-first search, is investigated. For these considerations, current massively parallel hardware accelerators of the types Nvidia Tesla M2090 and Intel Xeon Phi are used. For both accelerator types, the suitability of the work stealing scheme for the said applications could be demonstrated. The necessity of dynamic load balancing methods for irregular computations on this hardware is also illustrated.
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42166
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
T1 - Swendsen-Wang Multi-Cluster Algorithm for the 2D/3D Ising Model on Xeon Phi and GPU
N2 - Simulations of the critical Ising model by means of local update algorithms suffer from critical slowing down. One way to partially compensate for the influence of this phenomenon on the runtime of simulations is to use increasingly faster and parallel computer hardware. Another approach is to use algorithms that do not suffer from critical slowing down, such as cluster algorithms. This paper reports on the Swendsen-Wang multi-cluster algorithm on the Intel Xeon Phi coprocessor 5110P, the Nvidia Tesla M2090 GPU, and x86 multi-core CPUs. We present shared-memory versions of the said algorithm for the simulation of the two- and three-dimensional Ising model. We use a combination of local cluster search and global label reduction by means of atomic hardware primitives. Further, we describe an MPI version of the algorithm on Xeon Phi and CPU, respectively. Significant performance improvements over known implementations of the Swendsen-Wang algorithm are demonstrated.
T3 - ZIB-Report - 13-44
KW - Swendsen-Wang Multi-Cluster Algorithm
KW - Ising Model
KW - Xeon Phi
KW - GPGPU
KW - Connected Component Labeling
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42187
SN - 1438-0064
ER -

TY - GEN
A1 - Wende, Florian
A1 - Laubender, Guido
A1 - Steinke, Thomas
T1 - Integration of Intel Xeon Phi Servers into the HLRN-III Complex: Experiences, Performance and Lessons Learned
N2 - The third generation of the North German Supercomputing Alliance (HLRN) compute and storage facilities comprises a Cray XC30 architecture with exclusively Intel Ivy Bridge compute nodes.
In the second phase, scheduled for November 2014, the HLRN-III configuration will undergo a substantial upgrade together with the option of integrating accelerator nodes into the system. To support the decision-making process, a four-node Intel Xeon Phi cluster is integrated into the present HLRN-III infrastructure at ZIB. This integration includes user/project management, file system access, and job management via the HLRN-III batch system. For selected workloads, in-depth analysis, migration, and optimization work on Xeon Phi is in progress. We will report our experiences and lessons learned from the Xeon Phi installation and integration process. For selected examples, initial results of the application evaluation on the Xeon Phi cluster platform will be discussed.
T3 - ZIB-Report - 14-15
KW - Performance and usage measurement
KW - System management
KW - System integration
KW - Xeon Phi cluster
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-49990
UR - https://cug.org/proceedings/cug2014_proceedings/includes/files/pap194-file2.pdf
SN - 1438-0064
ER -

TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
A1 - Cordes, Frank
T1 - Multi-threaded Kernel Offloading to GPGPU Using Hyper-Q on Kepler Architecture
N2 - Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced the concurrent kernel execution feature, allowing up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of the respective resources. Insufficient scheduling capabilities in this respect, however, can significantly reduce the theoretical concurrency level. With the Kepler GPU architecture, Nvidia addresses this issue by introducing the Hyper-Q feature with 32 hardware-managed work queues for concurrent kernel execution. We investigate the Hyper-Q feature within heterogeneous workloads with multiple concurrent host threads or processes, each offloading computations to the GPU. By means of a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
T3 - ZIB-Report - 14-19
KW - GPGPU
KW - Hyper-Q
KW - Concurrent Kernel Execution
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-50362
SN - 1438-0064
ER -

TY - GEN
A1 - Wende, Florian
T1 - SIMD Enabled Functions on Intel Xeon CPU and Intel Xeon Phi Coprocessor
N2 - To achieve high floating point compute performance, modern processors draw on short vector SIMD units, as found e.g. in Intel CPUs (SSE, AVX1, AVX2, as well as AVX-512 on the roadmap) and the Intel Xeon Phi coprocessor, to operate on an increasingly large number of operands simultaneously. Making use of SIMD vector operations is therefore essential to get close to the processor’s floating point peak performance. Two approaches are typically used by programmers to utilize the vector units: compiler-driven vectorization via directives and code annotations, and manual vectorization by means of SIMD intrinsic operations or assembly. In this paper, we investigate the capabilities of the current Intel compiler (version 15 and later) to generate vector code for non-trivial coding patterns within loops. Besides the more or less uniform data-parallel standard loops or loop nests, which are typical candidates for SIMDfication, the occurrence of, e.g.,
(conditional) function calls including branching, and early returns from functions, may pose difficulties regarding the effective use of vector operations. Recent improvements of the compiler's capabilities involve the generation of SIMD-enabled functions. We will study the effectiveness of the vector code generated by the compiler by comparing it against hand-coded intrinsics versions of different kinds of functions that are invoked within innermost loops.
T3 - ZIB-Report - 15-17
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-54163
SN - 1438-0064
ER -

TY - THES
A1 - Wende, Florian
T1 - Simulation of Spin Models on Nvidia Graphics Cards using CUDA
N2 - This thesis reports on simulating spin models on Nvidia graphics cards using the CUDA programming model, a particular approach for making GPGPU (General Purpose Computation on Graphics Processing Units) available to a wide range of software developers not necessarily acquainted with (massively) parallel programming. By comparing program execution times for simulations of the Ising model and the Ising spin glass by means of the Metropolis algorithm on Nvidia Tesla C1060 graphics cards and an Intel Core i7-920 quad-core x86 CPU (we used OpenMP to make our simulations run on all 4 execution units of the CPU), we noticed that the Tesla C1060 performed about a factor of 5-10 faster than the Core i7-920, depending on the particular model and the accuracy of the calculations (32-bit or 64-bit). We also investigated the reliability of GPGPU computations, especially with respect to the occurrence of soft errors as suggested in [23]. We noticed faulty program outputs during long-time simulations of the Ising model on "large" lattices. We were able to link these problems to overheating of the corresponding graphics cards. Doing Monte Carlo simulations on parallel computer architectures, as was the case in this thesis, suggests also generating random numbers in a parallel manner. We present implementations of the random number generators Ranlux and Mersenne Twister. In addition, we give an alternative and very efficient approach for producing parallel random numbers on Nvidia graphics cards. We successfully tested all random number generators used in this thesis for their quality by comparing Monte Carlo estimates against exact calculations.
N2 - This Diplom thesis deals with the simulation of spin models on Nvidia graphics cards. The CUDA programming model is used, which gives a broad range of software developers access to GPGPU (General Purpose Computation on Graphics Processing Units) without them necessarily being acquainted with (massively) parallel programming. Comparisons of the program execution times for simulations of the Ising model and the Ising spin glass on Nvidia Tesla C1060 graphics cards and an Intel Core i7-920 x86 quad-core CPU showed that, depending on the particular model and the accuracy of the calculations (32-bit or 64-bit), the Tesla C1060 ran about 5-10 times faster than the Core i7-920 quad-core CPU. We also investigated the reliability of GPGPU computations, especially with respect to the occurrence of soft errors as suggested in [23]. We observed faulty program outputs during long simulations of the Ising model on "large" lattices. We were able to link these problems to overheating of the corresponding graphics cards.
For Monte Carlo simulations on parallel computer architectures, as in the present thesis, it is natural to also generate random numbers in parallel. We present implementations of the random number generators Ranlux and Mersenne Twister. In addition, we present an alternative and very efficient way of producing parallel random numbers on Nvidia graphics cards. All random number generators used were successfully tested for their quality by comparing Monte Carlo estimates against exact calculations.
Y1 - 2010
UR - http://edoc.hu-berlin.de/master/wende-florian-2010-10-20/PDF/wende.pdf
ER -
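
For readers who want a concrete picture of the kind of GPU spin-model simulation described in the two thesis records above, the following is a minimal sketch (not code from the theses): a checkerboard Metropolis update for the 2D Ising model in CUDA, with cuRAND for per-thread random numbers. The lattice size, inverse temperature, sweep count, and the use of cuRAND are illustrative assumptions rather than choices taken from the cited works.

#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <cstdio>

#define N 64          /* linear lattice size (illustrative choice) */
#define BETA 0.44f    /* inverse temperature, close to the 2D critical point */

__global__ void init_rng(curandState *states, unsigned long long seed)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    if (id < N * N) curand_init(seed, id, 0, &states[id]);
}

/* Update all sites of one checkerboard sublattice ("color" 0 or 1) in
 * parallel: sites of the same sublattice do not interact, so their
 * Metropolis updates are independent of each other. */
__global__ void metropolis_sweep(int *spin, curandState *states, int color)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    if (id >= N * N) return;
    int x = id % N, y = id / N;
    if (((x + y) & 1) != color) return;

    /* periodic boundary conditions */
    int xm = (x + N - 1) % N, xp = (x + 1) % N;
    int ym = (y + N - 1) % N, yp = (y + 1) % N;
    int sum = spin[y * N + xm] + spin[y * N + xp]
            + spin[ym * N + x] + spin[yp * N + x];
    int s = spin[id];
    float dE = 2.0f * s * sum;  /* energy change of flipping spin s (J = 1) */
    if (dE <= 0.0f || curand_uniform(&states[id]) < expf(-BETA * dE))
        spin[id] = -s;          /* accept the flip */
}

int main()
{
    int *spin;
    curandState *states;
    cudaMallocManaged(&spin, N * N * sizeof(int));
    cudaMalloc(&states, N * N * sizeof(curandState));
    for (int i = 0; i < N * N; ++i) spin[i] = 1;  /* cold start: all spins up */

    int threads = 256, blocks = (N * N + threads - 1) / threads;
    init_rng<<<blocks, threads>>>(states, 1234ULL);

    for (int sweep = 0; sweep < 1000; ++sweep) {  /* illustrative sweep count */
        metropolis_sweep<<<blocks, threads>>>(spin, states, 0);
        metropolis_sweep<<<blocks, threads>>>(spin, states, 1);
    }
    cudaDeviceSynchronize();

    long long m = 0;
    for (int i = 0; i < N * N; ++i) m += spin[i];
    printf("magnetization per spin: %f\n", (double)m / (N * N));

    cudaFree(spin);
    cudaFree(states);
    return 0;
}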