The Grid Application Toolkit: Toward Generic and Easy Application Programming Interfaces for the Grid
(2005)
Next Generation Grids 2 - Requirements and Options for European Grids Research 2005-2010 and Beyond
(2004)
Power-User und Supercomputer
(1999)
The performance of heuristic search algorithms depends crucially on the effectiveness of the heuristic. A pattern database (PDB) is a powerful heuristic in the form of a pre-computed lookup table. Larger PDBs provide better bounds and thus allow more cut-offs in the search process. Today, the largest PDB for the 24-puzzle is a 6-6-6-6 PDB with a size of 486 MB.
We created 8-8-8, 9-8-7, and 9-9-6 PDBs that are three orders of magnitude larger (up to 1.4 TB) than the 6-6-6-6 PDB. We show how to compute such large PDBs and present statistical and empirical data on their efficiency. The largest single PDB gives on average an 8-fold improvement over the 6-6-6-6 PDB; combining several large PDBs gives on average a 12-fold improvement.
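To make the technique concrete, here is a minimal sketch of how a PDB is built by backward breadth-first search over an abstracted puzzle and then used as a lookup heuristic. It uses a 3x3 8-puzzle and a hypothetical three-tile pattern rather than the terabyte-scale 24-puzzle PDBs of the paper; the non-additive cost model and all names are illustrative assumptions, not the authors' construction.

```python
from collections import deque

N = 3                 # 3x3 board (8-puzzle); the paper targets the 5x5 24-puzzle
PATTERN = (1, 2, 3)   # hypothetical 3-tile pattern; the paper uses 8- and 9-tile patterns
GOAL = tuple(range(N * N))  # cell i holds tile i; 0 denotes the blank

def abstract_neighbors(state):
    """state = (cell of tile 1, ..., cell of tile k, cell of blank).
    The blank swaps with any adjacent cell; non-pattern tiles are 'don't cares'."""
    *tiles, blank = state
    r, c = divmod(blank, N)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < N and 0 <= nc < N:
            dest = nr * N + nc
            moved = list(tiles)
            if dest in moved:                  # a pattern tile slides into the blank
                moved[moved.index(dest)] = blank
            yield tuple(moved) + (dest,)

def build_pdb():
    """Backward BFS from the abstracted goal; the BFS depth of each abstract
    state is the precomputed lookup value (a lower bound on true distance)."""
    goal_abs = tuple(PATTERN) + (0,)           # tile t on cell t, blank on cell 0
    dist = {goal_abs: 0}
    queue = deque([goal_abs])
    while queue:
        s = queue.popleft()
        for n in abstract_neighbors(s):
            if n not in dist:
                dist[n] = dist[s] + 1
                queue.append(n)
    return dist

def heuristic(board, pdb):
    """Project a concrete board onto the pattern and look up the stored bound."""
    key = tuple(board.index(t) for t in PATTERN) + (board.index(0),)
    return pdb[key]
```

Combining several PDBs, as in the paper, then amounts to taking the maximum of the individual lookups (or their sum, for disjoint additive patterns).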
Rechnerorganisation
(2012)
Special Section D-Grid
(2009)
In analogy to the electricity grid, from which the technical revolution originated, the term Computational Grid (Grid for short) was coined. An important part of the system is user-friendly access to, and coordinated use of, globally distributed storage resources and computing capacity. The development of the technologies and software (middleware) required for this benefits from knowledge and experience gained in the development of distributed algorithms, software engineering, and supercomputing.
Workstation clusters are often used not only for high-throughput computing in time-sharing mode but also for running complex parallel jobs in space-sharing mode. This poses several difficulties for the resource management system, which must be able to reserve computing resources for exclusive use and also to determine an optimal process mapping for a given system topology. On the basis of our CCS software, we describe the anatomy of a modern resource management system. Like Codine, Condor, and LSF, CCS provides mechanisms for user-friendly system access and the management of clusters. But unlike them, CCS is targeted at the effective support of space-sharing parallel computers and even metacomputers. Among other features, CCS provides a versatile resource description facility, topology-based process mapping, pluggable schedulers, and hooks for metacomputer management.
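To make the "pluggable scheduler" idea concrete, here is a hypothetical sketch of a space-sharing resource manager with an exchangeable scheduling policy. None of these interfaces are CCS's actual API; they only illustrate the architectural pattern the abstract describes.

```python
from dataclasses import dataclass
from typing import Optional, Protocol

@dataclass
class Job:
    job_id: str
    nodes_needed: int              # space-sharing: nodes are reserved exclusively

@dataclass
class Cluster:
    total_nodes: int
    reserved: int = 0              # nodes currently held by running jobs

    def free_nodes(self) -> int:
        return self.total_nodes - self.reserved

class Scheduler(Protocol):
    """Pluggable policy: given the queue and cluster state, pick the next job."""
    def pick_next(self, queue: list[Job], cluster: Cluster) -> Optional[Job]: ...

class FirstComeFirstServed:
    def pick_next(self, queue, cluster):
        # strict FCFS: only the head of the queue may start, so large jobs
        # are never starved by later, smaller ones
        if queue and queue[0].nodes_needed <= cluster.free_nodes():
            return queue[0]
        return None

class FirstFit:
    def pick_next(self, queue, cluster):
        # opportunistic: start the first queued job that fits the free nodes
        for job in queue:
            if job.nodes_needed <= cluster.free_nodes():
                return job
        return None
```

Swapping policies then only means handing a different Scheduler object to the manager, which is the essence of a hook-based design.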
Energy flow in the Photosystem I supercomplex: comparison of approximative theories with DM-HEOM
(2018)
We analyze the exciton dynamics in Photosystem I from Thermosynechococcus elongatus using the distributed memory implementation of the hierarchical equations of motion (DM-HEOM) for the 96 chlorophylls in the monomeric unit. The exciton-system parameters are taken from a first-principles calculation. A comparison of the exact results with Förster rates and Markovian approximations allows one to validate the exciton transfer times within the complex and to identify deviations from the approximative theories. We show the optical absorption, linear and circular dichroism spectra obtained with DM-HEOM and compare them to experimental results.
Time- and frequency-resolved optical signals provide insights into the properties of light-harvesting molecular complexes, including excitation energies, dipole strengths and orientations, as well as into the exciton energy flow through the complex. The hierarchical equations of motion (HEOM) provide a unifying theory, which allows one to study the combined effects of system-environment dissipation and non-Markovian memory without making restrictive assumptions about weak or strong couplings or the separability of vibrational and electronic degrees of freedom. With increasing system size, the exact solution of the open quantum system dynamics requires memory and compute resources beyond a single compute node. To overcome this barrier, we developed a scalable variant of HEOM. Our distributed memory HEOM, DM-HEOM, is a universal tool for open quantum system dynamics. It is used to accurately compute all experimentally accessible time- and frequency-resolved processes in light-harvesting molecular complexes with arbitrary system-environment couplings for a wide range of temperatures and complex sizes.
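For orientation, the coupled structure of the hierarchy can be written in the generic single-exponential-bath form below (after Tanimura). This is a textbook simplification shown only to illustrate the structure; the concrete superoperators used by DM-HEOM are not taken from this text.

```latex
% Auxiliary density operators \rho_{\mathbf n}, indexed by occupation vectors
% \mathbf n = (n_1,\dots,n_M); \rho_{\mathbf 0} is the physical density matrix.
\frac{\partial}{\partial t}\rho_{\mathbf n}
  = -\Bigl(\frac{i}{\hbar}\bigl[H_S,\ \cdot\ \bigr]
      + \sum_{m=1}^{M} n_m \gamma_m \Bigr)\rho_{\mathbf n}
    + \sum_{m=1}^{M}\Bigl(\Phi_m\,\rho_{\mathbf n+\mathbf e_m}
      + n_m\,\Theta_m\,\rho_{\mathbf n-\mathbf e_m}\Bigr)
```

Each auxiliary operator couples to its neighbours one level up and down the hierarchy; this coupling encodes the non-Markovian memory mentioned above and makes the state grow combinatorially with system size, which is what pushes the method beyond a single compute node.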
Transactional DHT Algorithms
(2009)
We present a framework for transactional access to data stored in a DHT. It allows items to be atomically read and written, and distributed transactions consisting of a sequence of read and write operations to be run on them. Items are symmetrically replicated in order to achieve durability of the data stored in the structured overlay network (SON). To provide availability of items despite the unavailability of some replicas, operations on items are quorum-based: they make progress as long as a majority of replicas can be accessed. Our framework processes transactions optimistically with an atomic commit protocol based on Paxos atomic commit. We present algorithms for the whole framework in an event-based notation. Additionally, we discuss the problem of lookup inconsistencies and its implications for the one-copy serializability of transaction processing in our framework.
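A minimal sketch of the majority-quorum idea for a single item follows (all names illustrative; the paper's framework additionally layers optimistic transactions and Paxos atomic commit on top, which this sketch omits):

```python
import random

class Replica:
    """One symmetric replica of an item, holding a (value, version) pair."""
    def __init__(self):
        self.value, self.version = None, 0

    def read(self):
        return self.value, self.version

    def write(self, value, version):
        if version > self.version:     # accept only writes that advance the version
            self.value, self.version = value, version

def majority(replicas):
    """Any majority suffices, so progress is possible while a minority is down."""
    return random.sample(replicas, len(replicas) // 2 + 1)

def quorum_read(replicas):
    """The freshest (highest-version) value seen by some majority wins."""
    return max((r.read() for r in majority(replicas)), key=lambda vv: vv[1])

def quorum_write(replicas, value):
    """Read the latest version from a majority, then write value with a higher one."""
    _, version = quorum_read(replicas)
    for r in majority(replicas):
        r.write(value, version + 1)

# usage: 5 replicas tolerate 2 unavailable ones
reps = [Replica() for _ in range(5)]
quorum_write(reps, "x=42")
print(quorum_read(reps))               # ('x=42', 1)
```

Because any two majorities intersect, a read is guaranteed to see the newest committed write even when different minorities are unreachable at read and write time.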
Computing the Hierarchical Equations of Motion (HEOM) is by itself a challenging problem, and so is writing portable production code that runs efficiently on a variety of architectures while scaling from PCs to supercomputers. We combined both challenges to push the boundaries of simulating quantum systems, and to evaluate and improve methodologies for scientific software engineering.
Our contributions are threefold: We present the first distributed memory implementation of the HEOM method (DM-HEOM), we describe an interdisciplinary development workflow, and we provide guidelines and experiences for designing distributed, performance-portable HPC applications with MPI-3, OpenCL and other state-of-the-art programming models. We evaluated the resulting code on multi- and many-core CPUs as well as GPUs, and demonstrated scalability on a Cray XC40 supercomputer for the PS I molecular light-harvesting complex.
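As a purely illustrative sketch (not DM-HEOM's actual decomposition), the basic idea of spreading the hierarchy's auxiliary density operators (ADOs) across ranks of a distributed memory machine might look as follows, using mpi4py in place of the paper's MPI-3/OpenCL stack:

```python
from mpi4py import MPI                 # assumes a working MPI installation
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

NUM_ADOS, DIM = 1024, 96               # illustrative ADO count and system dimension
lo = rank * NUM_ADOS // size           # contiguous block of ADOs owned by this rank
hi = (rank + 1) * NUM_ADOS // size
local = np.zeros((hi - lo, DIM, DIM), dtype=complex)

# one (trivialized) propagation step: update local ADOs, then exchange the
# hierarchy couplings that cross rank boundaries -- reduced here to a single
# send/recv of one boundary ADO with the neighbouring ranks
dest, src = (rank + 1) % size, (rank - 1) % size
boundary = np.ascontiguousarray(local[-1])
recv = np.empty_like(boundary)
comm.Sendrecv(boundary, dest=dest, recvbuf=recv, source=src)
```

The real implementation must additionally balance the highly non-uniform coupling pattern of the hierarchy, which is part of what makes the paper's decomposition non-trivial.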
Heuristic Search
(2009)
Parallel Heuristic Search
(2009)
The success of large-scale multi-national projects like the forthcoming analysis of the LHC particle collision data at CERN relies to a great extent on the ability to efficiently utilize computing resources. Much effort has been devoted to Grid management software (Datagrid, Globus, etc.), while the effective integration of computing nodes has been largely neglected up to now. This is the focus of our work. We present a framework for a high-performance cluster that can be used as a reliable computing node in the Grid. We outline the cluster architecture, the management of distributed data, and the seamless integration of the cluster into the Grid environment.
We present a middleware to store multidimensional data sets on Internet-scale distributed systems and to efficiently perform range queries on them. Our structured overlay network SONAR (Structured Overlay Network with Arbitrary Range queries) places keys that are adjacent in the key space on logically adjacent nodes in the overlay and is thereby able to process multidimensional range queries with a single logarithmic data lookup followed by local forwarding. The specified ranges may have arbitrary shapes such as rectangles, circles, spheres, or polygons. Empirical results demonstrate the routing performance of SONAR on several data sets, ranging from real-world data to artificially constructed worst-case distributions. We study the quality of SONAR's routing information, which is based on local knowledge only, and measure the in-degree of the overlay nodes to find potential hot spots in the routing process. We show that SONAR's routing table is self-adjusting, even in extreme situations, always keeping at most ⌈log N⌉ routing entries.
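A one-dimensional sketch of the adjacency-preserving routing idea follows. SONAR itself handles multidimensional keys and arbitrarily shaped ranges; all names here are illustrative assumptions, and only the lookup-then-forward pattern is taken from the abstract.

```python
class Node:
    """Each node owns a contiguous key interval [lo, hi), so intervals that are
    adjacent in the key space sit on logically adjacent overlay nodes."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.successor = None          # next node in key order
        self.predecessor = None
        self.fingers = []              # ~log N forward long-range links

    def owns(self, key):
        return self.lo <= key < self.hi

def lookup(node, key):
    """Greedy O(log N) routing: take the longest forward jump not overshooting key."""
    while not node.owns(key):
        if key < node.lo:
            node = node.predecessor    # a real overlay keeps backward fingers too
        else:
            jumps = [f for f in node.fingers if node.lo < f.lo <= key]
            node = max(jumps, key=lambda n: n.lo) if jumps else node.successor
    return node

def range_query(node, lo, hi):
    """One logarithmic lookup to the start of the range, then local forwarding."""
    n = lookup(node, lo)
    hits = []
    while n is not None and n.lo < hi:
        hits.append(n)                 # n stores part of [lo, hi)
        n = n.successor
    return hits
```

Because adjacent key ranges live on adjacent overlay nodes, the forwarding phase touches exactly the nodes responsible for the queried range, with no further routing steps.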