TY - GEN
A1 - Schütt, Thorsten
A1 - Schintke, Florian
A1 - Reinefeld, Alexander
T1 - Chord#: Structured Overlay Network for Non-Uniform Load-Distribution
N2 - \newcommand{\chordsharp}{Chord$^\#$} Data lookup is a fundamental problem in peer-to-peer systems: given a key, find the node that stores the associated object. Chord and other P2P algorithms use distributed hash tables (DHTs) to distribute the keys and nodes evenly across a logical ring. Using an efficient routing strategy, DHTs provide a routing performance of $O(\log N)$ in networks of $N$ nodes. While this routing performance has been shown to be optimal, the uniform key distribution makes it impossible for DHTs to support range queries. For range queries, consecutive keys must be stored on logically neighboring nodes. In this paper, we present an enhancement of Chord that eliminates the hash function while keeping the same routing performance. The resulting algorithm, named \chordsharp{}, provides richer functionality while maintaining the same complexity. Unlike Chord, \chordsharp{} adapts to load imbalance.
T3 - ZIB-Report - 05-40
KW - DHT
KW - P2P
KW - Range Queries
Y1 - 2005
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-8736
ER -
TY - GEN
A1 - Döbbelin, Robert
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Building Large Compressed PDBs for the Sliding Tile Puzzle
N2 - The performance of heuristic search algorithms depends crucially on the effectiveness of the heuristic. A pattern database (PDB) is a powerful heuristic in the form of a pre-computed lookup table. Larger PDBs provide better bounds and thus allow more cut-offs in the search process. Today, the largest PDB for the 24-puzzle is a 6-6-6-6 PDB with a size of 486 MB. We created 8-8-8, 9-8-7, and 9-9-6 PDBs that are three orders of magnitude larger (up to 1.4 TB) than the 6-6-6-6 PDB. We show how to compute such large PDBs and we present statistical and empirical data on their efficiency. The largest single PDB gives on average an 8-fold improvement over the 6-6-6-6 PDB; combining several large PDBs gives on average a 12-fold improvement.
T3 - ZIB-Report - 13-21
KW - heuristic search
KW - pattern databases
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-18095
SN - 1438-0064
ER -
TY - CHAP
A1 - Wende, Florian
A1 - Noack, Matthias
A1 - Schütt, Thorsten
A1 - Sachs, Stephen
A1 - Steinke, Thomas
T1 - Application Performance on a Cray XC30 Evaluation System with Xeon Phi Coprocessors at HLRN-III
T2 - Cray User Group
Y1 - 2015
ER -
TY - CHAP
A1 - Salem, Farouk
A1 - Schütt, Thorsten
A1 - Schintke, Florian
A1 - Reinefeld, Alexander
T1 - Scheduling Data Streams for Low Latency and High Throughput on a Cray XC40 Using Libfabric
T2 - CUG Conference Proceedings
N2 - Achieving efficient many-to-many communication on a given network topology is a challenging task when many data streams from different sources have to be scattered concurrently to many destinations with low variance in arrival times. In such scenarios, it is critical to saturate, but not to congest, the bisectional bandwidth of the network topology in order to achieve a good aggregate throughput. When there are many concurrent point-to-point connections, the communication pattern needs to be dynamically scheduled in a fine-grained manner to avoid network congestion (links, switches), overload of the nodes' incoming links, and receive-buffer overflow. Motivated by the use case of the Compressed Baryonic Matter (CBM) experiment, we study the performance and variance of such communication patterns on a Cray XC40 with different routing schemes and scheduling approaches. We present a distributed Data Flow Scheduler (DFS) that reduces the variance of arrival times from all sources by a factor of at least 30 and increases the achieved aggregate bandwidth by up to 50%.
Y1 - 2019
ER -
TY - JOUR
A1 - Salem, Farouk
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Scheduling data streams for low latency and high throughput on a Cray XC40 using Libfabric
JF - Concurrency and Computation: Practice and Experience
Y1 - 2020
U6 - https://doi.org/10.1002/cpe.5563
VL - 32
IS - 20
SP - 1
EP - 14
ER -
TY - JOUR
A1 - Salem, Farouk
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Improving the throughput of a scalable FLESnet using the Data-Flow Scheduler
JF - CBM Progress Report 2018
Y1 - 2019
SN - 978-3-9815227-6-1
U6 - https://doi.org/10.15120/GSI-2019-01018
SP - 149
EP - 150
ER -
TY - JOUR
A1 - Salem, Farouk
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Supporting various interconnects in FLESnet using Libfabric
JF - CBM Progress Report 2016
Y1 - 2017
UR - https://repository.gsi.de/record/201318
SN - 978-3-9815227-4-7
SP - 159
EP - 160
ER -
TY - JOUR
A1 - Salem, Farouk
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Data-flow scheduling for a scalable FLESnet
JF - CBM Progress Report 2017
Y1 - 2018
SN - 978-3-9815227-5-4
U6 - https://doi.org/10.15120/GSI-2018-00485
SP - 130
EP - 131
ER -
TY - CHAP
A1 - Gholami, Masoud
A1 - Schintke, Florian
A1 - Schütt, Thorsten
T1 - Checkpoint Scheduling for Shared Usage of Burst-Buffers in Supercomputers
T2 - Proceedings of the 47th International Conference on Parallel Processing Companion; SRMPDS 2018: The 14th International Workshop on Scheduling and Resource Management for Parallel and Distributed Systems
N2 - User-defined and system-level checkpointing have contrary properties. While user-defined checkpoints are smaller and simpler to recover, system-level checkpointing has better knowledge of the global system state and of parameters like the expected mean time to failure (MTTF) per node. Both approaches lead to non-optimal checkpoint times, intervals, sizes, or I/O bandwidth when concurrent checkpoints conflict and compete for resources. We combine user-defined and system-level checkpointing to exploit the benefits and avoid the drawbacks of each approach. Applications frequently offer to create checkpoints; the system either accepts such an offer, according to the current status and the implied cost of recalculating from the last checkpoint, or denies it, i.e., immediately lets the application continue without creating a checkpoint. To support this approach, we develop economic models for multi-application checkpointing on shared I/O resources that are dedicated to checkpointing (e.g., burst buffers), defining an appropriate goal function and solving a global optimization problem. Using our models, the checkpoints of applications on a supercomputer are scheduled to effectively use the available I/O bandwidth and to minimize the failure overhead (checkpoint creation plus recalculation). Our simulations show an overall reduction in the failure overhead across all nodes of up to 30% for a typical supercomputer workload (HLRN). We can also derive the most cost-effective burst-buffer bandwidth for a given per-node MTTF and application workload.
Y1 - 2018
U6 - https://doi.org/10.1145/3229710.3229755
SP - 44:1
EP - 44:10
ER -
TY - CHAP
A1 - Weinhold, Carsten
A1 - Lackorzynski, Adam
A1 - Bierbaum, Jan
A1 - Küttler, Martin
A1 - Planeta, Maksym
A1 - Weisbach, Hannes
A1 - Hille, Matthias
A1 - Härtig, Hermann
A1 - Margolin, Alexander
A1 - Sharf, Dror
A1 - Levy, Ely
A1 - Gak, Pavel
A1 - Barak, Amnon
A1 - Gholami, Masoud
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
A1 - Lieber, Matthias
A1 - Nagel, Wolfgang
T1 - FFMK: A Fast and Fault-Tolerant Microkernel-Based System for Exascale Computing
T2 - Software for Exascale Computing - SPPEXA 2016-2019
Y1 - 2019
U6 - https://doi.org/10.1007/978-3-030-47956-5_16
SP - 483
EP - 516
PB - Springer
ER -