In analogy to the electricity grids (electricity grid) from which the technological revolution once emanated, the term Computational Grid (Grid, for short) was coined. An important component of such a system is user-friendly access to, and coordinated use of, globally distributed storage resources and computing capacities. The development of the technologies and software (middleware) required for this benefits from knowledge and experience gained in the development of distributed algorithms, in software engineering, and in supercomputing.
Transactional DHT Algorithms
(2009)
We present a framework for transactional access to data stored in a DHT. It allows items to be read and written atomically and supports distributed transactions consisting of a sequence of read and write operations on these items. Items are symmetrically replicated in order to achieve durability of the data stored in the structured overlay network. To keep items available despite the unavailability of some replicas, operations on items are quorum-based: they make progress as long as a majority of replicas can be accessed. Our framework processes transactions optimistically with an atomic commit protocol based on Paxos atomic commit. We present algorithms for the whole framework in an event-based notation. Additionally, we discuss the problem of lookup inconsistencies and its implications for the one-copy serializability of transaction processing in our framework.
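A minimal sketch of the quorum-based operations described above, assuming a toy Replica class and a simple version counter (both illustrative assumptions, not the framework's actual API): a write installs a new version on a majority of replicas, and a read returns the newest value seen by a majority.

    # Sketch of majority-quorum reads/writes over symmetrically replicated
    # items. Replica, write_quorum, and read_quorum are illustrative names,
    # not the paper's actual framework API.

    class Replica:
        def __init__(self):
            self.version, self.value = 0, None

    def write_quorum(replicas, value):
        """Succeeds once a majority of replicas stores the new version."""
        majority = len(replicas) // 2 + 1
        reachable = replicas[:majority]          # assume these responded
        new_version = max(r.version for r in reachable) + 1
        for r in reachable:
            r.version, r.value = new_version, value
        return True

    def read_quorum(replicas):
        """Returns the value with the highest version among a majority."""
        majority = len(replicas) // 2 + 1
        reachable = replicas[:majority]          # assume these responded
        return max(reachable, key=lambda r: r.version).value

    replicas = [Replica() for _ in range(5)]
    write_quorum(replicas, "x = 42")
    print(read_quorum(replicas))                 # -> "x = 42"

Because any two majorities intersect, a read quorum always contains at least one replica holding the latest committed write.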
With the growing number of hardware components and the increasing software complexity in upcoming exascale computers, system failures will become the norm rather than the exception for long-running applications. Fault tolerance can be achieved by creating checkpoints during the execution of a parallel program. Checkpoint/Restart (C/R) mechanisms allow both task migration (even in the absence of hardware faults) and the restarting of tasks after hardware faults occur. Affected tasks are then migrated to other nodes, which may result in unfortunate process placement and/or oversubscription of compute resources. In this paper we analyze the impact of unfortunate process placement and oversubscription of compute resources on the performance and scalability of two typical HPC application workloads, CP2K and MOM5. Results are given for a Cray XC30/40 with the Aries dragonfly topology. Our results indicate that unfortunate process placement has only a small negative impact, while oversubscription substantially degrades performance. The latter may be (partially) beneficial only when multiple applications with different computational characteristics are placed on the same node.
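The C/R mechanism itself can be sketched in a few lines, assuming a hypothetical checkpoint file and state layout: the application state is serialized periodically, and a restarted (possibly migrated) task resumes from the last checkpoint instead of from step zero.

    # Minimal Checkpoint/Restart sketch. The file name, interval, and state
    # layout are illustrative assumptions, not taken from the paper.
    import os, pickle

    CHECKPOINT = "app.ckpt"

    def load_state():
        if os.path.exists(CHECKPOINT):           # resume after fault/migration
            with open(CHECKPOINT, "rb") as f:
                return pickle.load(f)
        return {"step": 0, "acc": 0.0}           # fresh start

    def save_state(state):
        with open(CHECKPOINT, "wb") as f:
            pickle.dump(state, f)

    state = load_state()
    for step in range(state["step"], 1000):
        state["acc"] += step * 1e-3              # stand-in for real work
        state["step"] = step + 1
        if state["step"] % 100 == 0:             # checkpoint interval
            save_state(state)
    save_state(state)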
The Grid Application Toolkit: Toward Generic and Easy Application Programming Interfaces for the Grid
(2005)
The Clouds of Revolution
(2010)
Special Section D-Grid
(2009)
Self-Adaptation in Large-Scale Systems: A Study on Structured Overlays Across Multiple Datacenters
(2009)
Self Management of Large-Scale Distributed Systems by Combining Peer-to-Peer Networks and Components
(2005)
Achieving efficient many-to-many communication on a given network topology is a challenging task when many data streams from different sources have to be scattered concurrently to many destinations with low variance in arrival times. In such scenarios, it is critical to saturate, but not congest, the bisection bandwidth of the network topology in order to achieve good aggregate throughput. When there are many concurrent point-to-point connections, the communication pattern needs to be dynamically scheduled in a fine-grained manner to avoid network congestion (links, switches), overload of a node's incoming links, and receive-buffer overflow. Motivated by the use case of the Compressed Baryonic Matter (CBM) experiment, we study the performance and variance of such communication patterns on a Cray XC40 with different routing schemes and scheduling approaches. We present a distributed Data Flow Scheduler (DFS) that reduces the variance of arrival times from all sources by a factor of at least 30 and increases the achieved aggregate bandwidth by up to 50%.
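The congestion-avoidance idea behind such fine-grained scheduling can be illustrated with the classic phase-shifted all-to-all schedule (a generic textbook pattern, not the DFS algorithm itself): in round k, source i sends only to destination (i + k) mod n, so each round forms a perfect matching and no receiver is hit by two senders at once.

    # Phase-shifted many-to-many schedule: every round is a permutation of
    # destinations, so incoming links and receive buffers are never
    # oversubscribed within a round. Illustrative sketch only; the paper's
    # DFS schedules flows dynamically.

    def shifted_schedule(n):
        for k in range(n):
            yield [(src, (src + k) % n) for src in range(n)]

    for round_no, transfers in enumerate(shifted_schedule(4)):
        print(f"round {round_no}: {transfers}")
    # round 0: [(0, 0), (1, 1), (2, 2), (3, 3)]
    # round 1: [(0, 1), (1, 2), (2, 3), (3, 0)]  ... each destination once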
Rechnerorganisation
(2012)
Power-User und Supercomputer
(1999)
Parallel Heuristic Search
(2009)
We present a middleware to store multidimensional data sets on Internet-scale distributed systems and to efficiently perform range queries on them. Our structured overlay network SONAR (Structured Overlay Network with Arbitrary Range queries) places keys that are adjacent in the key space on logically adjacent nodes in the overlay and is thereby able to process multidimensional range queries with a single logarithmic data lookup and local forwarding. The specified ranges may have arbitrary shapes such as rectangles, circles, spheres, or polygons. Empirical results demonstrate the routing performance of SONAR on several data sets, ranging from real-world data to artificially constructed worst-case distributions. We study the quality of SONAR's routing information, which is based on local knowledge only, and measure the in-degree of the overlay nodes to find potential hot spots in the routing process. We show that SONAR's routing table is self-adjusting, even under extreme conditions, always keeping at most $\lceil \log N \rceil$ routing entries.
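SONAR's single-lookup range query can be sketched on a key-order-preserving ring: route logarithmically to the node owning the lower bound of the range, then forward along successors until the upper bound is covered. The node layout and lookup stand-in below are simplified assumptions, not SONAR's actual data structures.

    # Sketch of a range query on a key-order-preserving overlay: one
    # logarithmic lookup to the node owning the start of the range, then
    # local forwarding along successors.
    import bisect

    class Node:
        def __init__(self, lo, items):
            self.lo, self.items, self.successor = lo, items, None

    def lookup(nodes, key):
        """Stand-in for the O(log N) overlay lookup of the owner of `key`."""
        idx = bisect.bisect_right([n.lo for n in nodes], key) - 1
        return nodes[idx]

    def range_query(nodes, lo, hi):
        node, hits = lookup(nodes, lo), []
        while node is not None and node.lo <= hi:    # forward along the ring
            hits += [k for k in node.items if lo <= k <= hi]
            node = node.successor
        return hits

    nodes = [Node(0, [2, 5]), Node(10, [11, 17]), Node(20, [25])]
    for a, b in zip(nodes, nodes[1:]):
        a.successor = b
    print(range_query(nodes, 4, 18))                 # -> [5, 11, 17]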
Global grid environments provide not only massive aggregated computing power but also an unprecedented amount of distributed storage space. Unfortunately, dynamic changes caused by component failures, local decisions, and irregular data updates make it difficult to use this capacity efficiently. In this paper, we address the problem of improving data availability in the presence of unreliable components. We present an analytical model for determining an optimal combination of distributed replica catalogs, catalog sizes, and replica servers. Empirical simulation results confirm the accuracy of our theoretical analysis. Our model captures the characteristics of highly dynamic environments like peer-to-peer networks, but it can also be applied to more centralized, less dynamic grid environments like the European DataGrid.
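The flavor of such an availability analysis can be reproduced with the standard quorum-availability formula: with $n$ replicas, each independently reachable with probability $p$, a majority is available with probability $\sum_{k=\lceil (n+1)/2 \rceil}^{n} \binom{n}{k} p^k (1-p)^{n-k}$. The snippet below evaluates this textbook model; the paper's model additionally optimizes catalog placement and sizes.

    # Textbook majority-quorum availability: probability that a majority of
    # n independent replicas (each up with probability p) is reachable.
    # Illustrates the style of analysis only; the paper's model is richer.
    from math import comb

    def majority_availability(n, p):
        majority = n // 2 + 1
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(majority, n + 1))

    for n in (1, 3, 5, 7):
        print(n, round(majority_availability(n, 0.9), 4))
    # 1 0.9, 3 0.972, 5 0.9914, 7 0.9973 -- each added replica pair pushes
    # the quorum availability closer to 1.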
Next Generation Grids 2 - Requirements and Options for European Grids Research 2005-2010 and Beyond
(2004)
The success of large-scale multi-national projects like the forthcoming analysis of the LHC particle collision data at CERN relies to a great extent on the ability to efficiently utilize distributed computing resources. Much effort has gone into Grid management software (DataGrid, Globus, etc.), while the effective integration of computing nodes has been largely neglected up to now. This is the focus of our work. We present a framework for a high-performance cluster that can be used as a reliable computing node in the Grid. We outline the cluster architecture, the management of distributed data, and the seamless integration of the cluster into the Grid environment.