For almost three years, the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) has been operating parallel computers of the highest performance class in regular computing-center production. As early as May 1995, ZIB reported on its experiences with what was then the most powerful parallel computer in Germany. ZIB's overall concept continues to regard a high-performance computer as an indispensable component of High Performance Scientific Computing (HPSC) at ZIB. This report describes the current configuration, operational experiences, and system usage, as well as typical compute performance achieved for individual application programs. The report concludes with descriptions of the research areas and research groups that use the machine, and of the requirements for system expansion that derive from their work.
THESEUS, the ZIB threading environment, is a parallel implementation of protein threading based on a multi-queued branch-and-bound optimal search algorithm that finds the best sequence-to-structure alignment through a library of template structures. THESEUS uses a template core model based on secondary structure definition and a scoring function based on knowledge-based potentials reflecting pairwise interactions and the chemical environment, as well as pseudo-energies for homology detection, loop alignment, and secondary structure matching. The threading core is implemented in C++ as an SPMD parallelization using MPI for communication. The environment is designed for generic testing of different scoring functions, e.g., different secondary structure prediction terms, different scoring matrices, and information derived from multiple sequence alignments. A validation of the structure prediction results has been done on the basis of standard threading benchmark sets. THESEUS successfully participated in the 6th Critical Assessment of Techniques for Protein Structure Prediction (CASP) in 2004.
Many interesting phenomena in molecular systems, like interactions between macromolecules, protein-substrate docking, or channeling processes in membranes, are governed to a high degree by classical Coulomb or van der Waals forces. The visualization of these force fields is important for verifying numerical simulations. Moreover, by inspecting the forces visually we can gain deeper insight into the molecular processes. Up to now, the visualization of vector fields has been quite unusual in computational chemistry. In fact, many commercial software packages do not support it at all. The reason is not that vector fields are considered unimportant, but mainly the lack of adequate visualization methods. In this paper we survey a number of methods for vector field visualization, ranging from well-known concepts like arrow or streamline plots to more advanced techniques like line integral convolution, and show how these can be applied to computational chemistry. A combination of the most meaningful methods in an interactive 3D visualization environment can provide a powerful toolbox for analysing simulations in molecular dynamics.
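As an illustration of the streamline technique surveyed above, the following minimal sketch (illustrative field, names, and step size, not taken from the paper) traces a single streamline through a steady 2D vector field with a second-order Runge-Kutta integrator; a real tool would sample the field from simulation data instead of evaluating an analytic function:

    #include <cstdio>
    #include <vector>

    struct Vec2 { double x, y; };

    // Example steady 2D field (an illustrative stand-in for a Coulomb or
    // van der Waals force field sampled from a simulation): rigid rotation.
    Vec2 field(Vec2 p) { return Vec2{-p.y, p.x}; }

    // Trace a streamline from a seed point with step size h using the
    // 2nd-order Runge-Kutta (midpoint) method.
    std::vector<Vec2> streamline(Vec2 seed, double h, int steps) {
        std::vector<Vec2> line{seed};
        Vec2 p = seed;
        for (int i = 0; i < steps; ++i) {
            Vec2 k1 = field(p);
            Vec2 mid{p.x + 0.5 * h * k1.x, p.y + 0.5 * h * k1.y};
            Vec2 k2 = field(mid);
            p = Vec2{p.x + h * k2.x, p.y + h * k2.y};
            line.push_back(p);
        }
        return line;
    }

    int main() {
        for (Vec2 q : streamline(Vec2{1.0, 0.0}, 0.05, 10))
            printf("%.3f %.3f\n", q.x, q.y);  // points lie near the unit circle
        return 0;
    }

Arrow plots sample the same field at grid points instead of integrating it; line integral convolution smears a noise texture along exactly these integral curves.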
The effectiveness of ahead-of-time compiler optimization heavily depends on the amount of information available at compile time. Input-specific information that is only available at runtime cannot be used, although it often determines loop counts, branching predicates and paths, as well as memory-access patterns. It can also be crucial for generating efficient SIMD-vectorized code. This is especially relevant for the many-core architectures paving the way to exascale computing, which are more sensitive to code optimization. We explore the design space for using input-specific information at compile time and present KART, a C++ library solution that allows developers to compile, link, and execute code (e.g., C, C++, Fortran) at application runtime. Besides mere runtime compilation of performance-critical code, KART can be used to instantiate the same code multiple times using different inputs, compilers, and options. Other techniques like auto-tuning and code generation can be integrated into a KART-enabled application instead of being scripted around it. We evaluate runtimes and compilation costs for different synthetic kernels, and show the effectiveness for two real-world applications, HEOM and a WSM6 proxy.
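The abstract does not show KART's actual interface; the following sketch only illustrates the underlying technique on POSIX systems, with illustrative file names, flags, and kernel. Source for a performance-critical function is generated with an input-specific loop count baked in as a literal, compiled at application runtime, and loaded via dlopen:

    // Minimal runtime-compilation sketch (not KART's API): generate,
    // compile, load, and call a C function at application runtime.
    #include <cstdio>
    #include <cstdlib>
    #include <dlfcn.h>

    int main() {
        const int n = 1024;  // input-specific loop count, known only at runtime
        FILE* src = fopen("/tmp/kernel.c", "w");
        fprintf(src,
            "double sum(const double* a) {\n"
            "  double s = 0.0;\n"
            "  for (int i = 0; i < %d; ++i) s += a[i];\n"  // n becomes a literal
            "  return s;\n"
            "}\n", n);
        fclose(src);

        // Invoke the system compiler; flags could vary per instantiation.
        if (system("cc -O3 -shared -fPIC /tmp/kernel.c -o /tmp/kernel.so") != 0)
            return 1;

        void* lib = dlopen("/tmp/kernel.so", RTLD_NOW);
        if (!lib) return 1;
        double (*sum)(const double*) =
            (double (*)(const double*))dlsym(lib, "sum");

        double a[1024];
        for (int i = 0; i < 1024; ++i) a[i] = 1.0;
        printf("%f\n", sum(a));  // prints 1024.000000
        dlclose(lib);
        return 0;
    }

Because the loop bound is a compile-time constant in the generated source, the compiler can unroll and vectorize the loop for exactly this input.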
Current programming models for heterogeneous devices with disjoint physical memory spaces require explicit allocation of device memory and explicit data transfers. While it is quite easy to implement these operations manually for linear data objects like arrays, the task becomes more difficult for non-linear objects, e.g., linked lists or classes using multiple inheritance. The difficulties arise from dynamic memory requirements at run-time and the dependencies between data structures. In this paper we present a novel method that builds a graph-based static data type description, which is used to create code for injectable functions that automatically determine the memory footprint of data objects at run-time. Our approach is extensible to automatically generating optimized data transfers across physical memory spaces.
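As a hand-written analog of such an injectable footprint function (hypothetical Node type and function names; the paper derives this kind of traversal from its static type description rather than writing it by hand), consider a singly linked list with dynamically sized payloads:

    #include <cstddef>
    #include <cstdio>

    struct Node {
        double* payload;   // dynamically sized buffer
        size_t  len;       // number of doubles in payload
        Node*   next;
    };

    // Walk the object graph and sum the bytes that would have to be
    // transferred to a device with a disjoint memory space.
    size_t footprint(const Node* head) {
        size_t bytes = 0;
        for (const Node* n = head; n != nullptr; n = n->next) {
            bytes += sizeof(Node);             // the node itself
            bytes += n->len * sizeof(double);  // its dynamic payload
        }
        return bytes;
    }

    int main() {
        double buf1[4] = {0}, buf2[2] = {0};
        Node b{buf2, 2, nullptr};
        Node a{buf1, 4, &b};
        printf("%zu bytes\n", footprint(&a));  // 2*sizeof(Node) + 6*sizeof(double)
        return 0;
    }

For arrays this reduces to a single multiplication; the point of the generated traversal is that pointer-linked structures require following the dependencies at run-time.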
Scheduling algorithms for heterogeneous platforms make scheduling decisions based on several metrics. One of these metrics is the amount of data to be transferred to and from the accelerator. However, determining this metric automatically is not a simple task. A few schedulers and runtime systems solve this problem by using regression models, which are, however, imprecise. Our novel approach to determining data volumes removes this limitation and thus provides a way to obtain exact information.
Simulations of the critical Ising model by means of local update algorithms suffer from critical slowing down. One way to partially compensate for the influence of this phenomenon on the runtime of simulations is using increasingly faster and parallel computer hardware. Another approach is using algorithms that do not suffer from critical slowing down, such as cluster algorithms. This paper reports on the Swendsen-Wang multi-cluster algorithm on the Intel Xeon Phi coprocessor 5110P, the Nvidia Tesla M2090 GPU, and x86 multi-core CPUs. We present shared memory versions of the said algorithm for the simulation of the two- and three-dimensional Ising model. We use a combination of local cluster search and global label reduction by means of atomic hardware primitives. Further, we describe an MPI version of the algorithm on Xeon Phi and CPU, respectively. Significant performance improvements over known implementations of the Swendsen-Wang algorithm are demonstrated.
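For reference, here is a minimal serial sketch of one Swendsen-Wang sweep for the 2D Ising model (periodic boundaries, J = 1, illustrative function names); the bond-activation and cluster-labeling steps shown serially here are exactly what the paper's shared-memory and MPI versions parallelize:

    #include <cmath>
    #include <random>
    #include <vector>

    struct UnionFind {
        std::vector<int> parent;
        explicit UnionFind(int n) : parent(n) {
            for (int i = 0; i < n; ++i) parent[i] = i;
        }
        int find(int x) {  // with path halving
            while (parent[x] != x) x = parent[x] = parent[parent[x]];
            return x;
        }
        void unite(int a, int b) { parent[find(a)] = find(b); }
    };

    void swendsen_wang_sweep(std::vector<int>& spin, int L, double beta,
                             std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        const double p = 1.0 - std::exp(-2.0 * beta);  // bond activation probability
        UnionFind uf(L * L);
        // 1. Activate bonds between aligned nearest neighbors with probability p.
        for (int y = 0; y < L; ++y)
            for (int x = 0; x < L; ++x) {
                int i = y * L + x;
                int right = y * L + (x + 1) % L;
                int down  = ((y + 1) % L) * L + x;
                if (spin[i] == spin[right] && u(rng) < p) uf.unite(i, right);
                if (spin[i] == spin[down]  && u(rng) < p) uf.unite(i, down);
            }
        // 2. Flip every cluster as a whole with probability 1/2.
        std::vector<int> flip(L * L, -1);
        for (int i = 0; i < L * L; ++i) {
            int root = uf.find(i);
            if (flip[root] < 0) flip[root] = (u(rng) < 0.5) ? 1 : 0;
            if (flip[root]) spin[i] = -spin[i];
        }
    }

Because entire clusters are flipped at once, cluster sizes track the diverging correlation length near criticality, which is why the algorithm avoids critical slowing down.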
The third generation of the North German Supercomputing Alliance (HLRN) compute and storage facilities comprises a Cray XC30 architecture with exclusively Intel Ivy Bridge compute nodes. In the second phase, scheduled for November 2014, the HLRN-III configuration will undergo a substantial upgrade, together with the option of integrating accelerator nodes into the system. To support the decision-making process, a four-node Intel Xeon Phi cluster has been integrated into the present HLRN-III infrastructure at ZIB. This integration includes user/project management, file system access, and job management via the HLRN-III batch system. For selected workloads, in-depth analysis, migration, and optimization work on Xeon Phi is in progress. We will report on our experiences and lessons learned during the Xeon Phi installation and integration process. For selected examples, initial results of the application evaluation on the Xeon Phi cluster platform will be discussed.
With the growing number of hardware components and the increasing software complexity in upcoming exascale computers, system failures will become the norm rather than the exception for long-running applications. Fault tolerance can be achieved by creating checkpoints during the execution of a parallel program. Checkpoint/Restart (C/R) mechanisms allow both for task migration (even in the absence of hardware faults) and for restarting tasks after hardware faults have occurred. Affected tasks are then migrated to other nodes, which may result in unfortunate process placement and/or oversubscription of compute resources. In this paper we analyze the impact of unfortunate process placement and of oversubscription of compute resources on the performance and scalability of two typical HPC application workloads, CP2K and MOM5. Results are given for a Cray XC30/40 with Aries dragonfly topology. Our results indicate that unfortunate process placement has only a small negative impact, while oversubscription substantially degrades performance. The latter might only be (partially) beneficial when multiple applications with different computational characteristics are placed on the same node.
Small-scale computations usually cannot fully utilize the compute capabilities of modern GPGPUs. With the Fermi GPU architecture, Nvidia introduced the concurrent kernel execution feature, allowing up to 16 GPU kernels to execute simultaneously on a shared GPU device for better utilization of the respective resources. Insufficient scheduling capabilities in this respect, however, can significantly reduce the theoretical concurrency level. With the Kepler GPU architecture, Nvidia addresses this issue by introducing the Hyper-Q feature with 32 hardware-managed work queues for concurrent kernel execution. We investigate the Hyper-Q feature within heterogeneous workloads with multiple concurrent host threads or processes, each offloading computations to the GPU. By means of a synthetic benchmark kernel and a hybrid parallel CPU-GPU real-world application, we evaluate the performance obtained with Hyper-Q on the GPU and compare it against a kernel reordering mechanism introduced by the authors for the Fermi architecture.
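The offload pattern under study can be sketched as follows (illustrative kernel, stream count, and sizes, not the paper's benchmark): several independent small kernels are issued into separate CUDA streams, so that Hyper-Q can map them to distinct hardware work queues instead of serializing them behind one another:

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float* x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const int kStreams = 8, n = 1 << 12;  // deliberately small kernels
        cudaStream_t streams[kStreams];
        float* buf[kStreams];
        for (int s = 0; s < kStreams; ++s) {
            cudaStreamCreate(&streams[s]);
            cudaMalloc(&buf[s], n * sizeof(float));
            // Each launch goes into its own stream; with Hyper-Q these can
            // execute concurrently rather than back to back.
            scale<<<(n + 255) / 256, 256, 0, streams[s]>>>(buf[s], 2.0f, n);
        }
        cudaDeviceSynchronize();
        for (int s = 0; s < kStreams; ++s) {
            cudaFree(buf[s]);
            cudaStreamDestroy(streams[s]);
        }
        printf("done\n");
        return 0;
    }

On Fermi, launches from different streams could still serialize in the single hardware queue, which is what motivated the authors' kernel reordering mechanism.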
Density functional study of guanine and uracil quartets and of guanine quartet/metal ion complexes
(2000)
The structures and interaction energies of guanine and uracil quartets have been determined by B3LYP hybrid density functional calculations. The total interaction energy $\Delta E^{T}$ of the $C_{4h}$-symmetric guanine quartet, consisting of Hoogsteen-type base pairs with two hydrogen bonds between neighbouring bases, is -66.07 kcal/mol at the highest level. The uracil quartet with C6-H6...O4 interactions between the individual bases has only a small interaction energy of -20.92 kcal/mol, and the interaction energy of -24.63 kcal/mol for the alternative structure with N3-H3...O4 hydrogen bonds is only slightly more negative. Cooperative effects contribute between 10 and 25% to all interaction energies. Complexes of metal ions with G-quartets can be classified into different structure types. The one with Ca$^{2+}$ in the central cavity adopts a $C_{4h}$-symmetric structure with coplanar bases, whereas the energies of the planar and non-planar Na$^{+}$ complexes are almost identical. The small ions Li$^{+}$, Be$^{2+}$, Cu$^{+}$ and Zn$^{2+}$ prefer a non-planar $S_{4}$-symmetric structure; the lack of coplanarity probably prevents stacking of these base quartets. The central cavity is too small for K$^{+}$ ions, and this ion therefore, in contrast to all other investigated ions, favours a $C_{4}$-symmetric complex, which is 4.73 kcal/mol more stable than the $C_{4h}$-symmetric one. The distance of 1.665 Å between K$^{+}$ and the root-mean-square plane of the guanine bases is approximately half the distance between two stacked G-quartets. The total interaction energies of the alkaline earth ion complexes exceed those of the alkali ion complexes. Within both groups of ions the interaction energy decreases with increasing row position in the periodic table. The B3LYP and BLYP methods lead to similar structures and energies; both methods are suitable for hydrogen-bonded biological systems. Compared with the aforementioned methods, the HCTH functional leads to longer hydrogen bonds and different relative energies for the two U-quartets. Finally, we also calculated structures and relative energies with the MMFF94 force field. Contrary to all DFT methods, MMFF94 predicts bifurcated C-H...O contacts in the uracil quartet. In the G-quartet the MMFF94 hydrogen bond distances N2-H22...N7 are shorter than the DFT distances, whereas the N1-H1...O6 distances are longer.