TY - GEN
A1 - Moser, Monika
A1 - Haridi, Seif
A1 - Shafaat, Tallat
A1 - Schütt, Thorsten
A1 - Högqvist, Mikael
A1 - Reinefeld, Alexander
T1 - Transactional DHT Algorithms
N2 - We present a framework for transactional data access on data stored in a DHT. It allows items to be read and written atomically and distributed transactions to be run, each consisting of a sequence of read and write operations on the items. Items are symmetrically replicated in order to achieve durability of the data stored in the structured overlay network (SON). To provide availability of items despite the unavailability of some replicas, operations on items are quorum-based. They make progress as long as a majority of replicas can be accessed. Our framework processes transactions optimistically with an atomic commit protocol that is based on Paxos atomic commit. We present algorithms for the whole framework in an event-based notation. Additionally, we discuss the problem of lookup inconsistencies and its implications for the one-copy serializability property of the transaction processing in our framework.
T3 - ZIB-Report - 09-34
KW - Distributed System
KW - DHT
KW - Paxos
KW - Algorithms
KW - Storage
Y1 - 2009
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-11532
SN - 1438-0064
ER -
TY - GEN
A1 - Schintke, Florian
A1 - Reinefeld, Alexander
A1 - Haridi, Seif
A1 - Schütt, Thorsten
T1 - Enhanced Paxos Commit for Transactions on DHTs
N2 - Key/value stores that are built on structured overlay networks often lack support for atomic transactions and strong data consistency among replicas. This is unfortunate, because consistency guarantees and transactions would allow a wide range of additional application domains to benefit from the inherent scalability and fault-tolerance of DHTs. The Scalaris key/value store supports strong data consistency and atomic transactions. It uses an enhanced Paxos Commit protocol with only four communication steps rather than six. This improvement was made possible by exploiting information from the replica distribution in the DHT. Scalaris enables the implementation of more reliable and scalable infrastructure for collaborative Web services that require strong consistency and atomic changes across multiple items.
T3 - ZIB-Report - 09-28
KW - Paxos
KW - transactions
KW - DHT
KW - strong consistency
Y1 - 2009
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-11448
SN - 1438-0064
ER -
TY - CHAP
A1 - Döbbelin, Robert
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Building Large Compressed PDBs for the Sliding Tile Puzzle
T2 - Workshop on Computer Games
Y1 - 2013
ER -
TY - JOUR
A1 - Salem, Farouk
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Data-flow scheduling for a scalable FLESnet
JF - CBM Progress Report 2017
Y1 - 2018
SN - 978-3-9815227-5-4
U6 - https://doi.org/10.15120/GSI-2018-00485
SP - 130
EP - 131
ER -
TY - CHAP
A1 - Gholami, Masoud
A1 - Schintke, Florian
A1 - Schütt, Thorsten
T1 - Checkpoint Scheduling for Shared Usage of Burst-Buffers in Supercomputers
T2 - Proceedings of the 47th International Conference on Parallel Processing Companion; SRMPDS 2018: The 14th International Workshop on Scheduling and Resource Management for Parallel and Distributed Systems
N2 - User-defined and system-level checkpointing have contrasting properties. While user-defined checkpoints are smaller and simpler to recover from, system-level checkpointing has better knowledge of the global system state and of parameters like the expected mean time to failure (MTTF) per node. Both approaches lead to non-optimal checkpoint times, intervals, and sizes when concurrent checkpoints conflict and compete for the available I/O bandwidth. We combine user-defined and system-level checkpointing to exploit the benefits and avoid the drawbacks of each approach. Applications thus frequently offer to create checkpoints. The system accepts such offers, depending on the current status and the implied cost of recalculating from the last checkpoint, or denies them, i.e., lets the application continue immediately without creating a checkpoint. To support this approach, we develop economic models for multi-application checkpointing on shared I/O resources dedicated to checkpointing (e.g., burst buffers) by defining an appropriate goal function and solving a global optimization problem. Using our models, the checkpoints of applications on a supercomputer are scheduled to use the available I/O bandwidth effectively and to minimize the failure overhead (checkpoint creation plus recalculation). Our simulations show an overall reduction of the failure overhead across all nodes of up to 30% for a typical supercomputer workload (HLRN). We can also derive the most cost-effective burst-buffer bandwidth for a given node MTTF and application workload.
Y1 - 2018
U6 - https://doi.org/10.1145/3229710.3229755
SP - 44:1
EP - 44:10
ER -
TY - GEN
A1 - Gholami, Masoud
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Modeling Checkpoint Schedules for Concurrent HPC Applications
T2 - CoSaS 2018 International Symposium on Computational Science at Scale
Y1 - 2018
ER -
TY - CHAP
A1 - Schmidtke, Robert
A1 - Schintke, Florian
A1 - Schütt, Thorsten
T1 - From Application to Disk: Tracing I/O Through the Big Data Stack
T2 - High Performance Computing ISC High Performance 2018 International Workshops, Frankfurt/Main, Germany, June 24 - 28, 2018, Revised Selected Papers, Workshop on Performance and Scalability of Storage Systems (WOPSSS)
N2 - Typical applications in data science consume, process, and produce large amounts of data, making disk I/O one of the dominating factors of their overall performance and thus a worthwhile optimization target. Distributed processing frameworks, such as Hadoop, Flink and Spark, hide a lot of complexity from the programmer when they parallelize these applications across a compute cluster. This complicates reasoning about the I/O of both the application and the framework, through the distributed file system, such as HDFS, down to the local file systems. We present SFS (Statistics File System), a modular framework to trace each I/O request issued by the application and any JVM-based big data framework involved, mapping these requests to actual disk I/O. This allows the detection of inefficient I/O patterns, both of the applications and of the underlying frameworks, and builds the basis for improving I/O scheduling in the big data software stack.
Y1 - 2018
U6 - https://doi.org/10.1007/978-3-030-02465-9_6
SP - 89
EP - 102
ER -
TY - CHAP
A1 - Weinhold, Carsten
A1 - Lackorzynski, Adam
A1 - Bierbaum, Jan
A1 - Küttler, Martin
A1 - Planeta, Maksym
A1 - Weisbach, Hannes
A1 - Hille, Matthias
A1 - Härtig, Hermann
A1 - Margolin, Alexander
A1 - Sharf, Dror
A1 - Levy, Ely
A1 - Gak, Pavel
A1 - Barak, Amnon
A1 - Gholami, Masoud
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
A1 - Lieber, Matthias
A1 - Nagel, Wolfgang
T1 - FFMK: A Fast and Fault-Tolerant Microkernel-Based System for Exascale Computing
T2 - Software for Exascale Computing - SPPEXA 2016-2019
Y1 - 2019
U6 - https://doi.org/10.1007/978-3-030-47956-5_16
SP - 483
EP - 516
PB - Springer
ER -
TY - GEN
A1 - Döbbelin, Robert
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Building Large Compressed PDBs for the Sliding Tile Puzzle
N2 - The performance of heuristic search algorithms depends crucially on the effectiveness of the heuristic. A pattern database (PDB) is a powerful heuristic in the form of a pre-computed lookup table. Larger PDBs provide better bounds and thus allow more cut-offs in the search process. Today, the largest PDB for the 24-puzzle is a 6-6-6-6 PDB with a size of 486 MB. We created 8-8-8, 9-8-7 and 9-9-6 PDBs that are three orders of magnitude larger (up to 1.4 TB) than the 6-6-6-6 PDB. We show how to compute such large PDBs and we present statistical and empirical data on their efficiency. The largest single PDB gives on average an 8-fold improvement over the 6-6-6-6 PDB. Combining several large PDBs gives on average a 12-fold improvement.
T3 - ZIB-Report - 13-21
KW - heuristic search
KW - pattern databases
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-18095
SN - 1438-0064
ER -
TY - JOUR
A1 - Kruber, Nico
A1 - Högqvist, Mikael
A1 - Schütt, Thorsten
T1 - The Benefits of Estimated Global Information in DHT Load Balancing
JF - Cluster Computing and the Grid, IEEE International Symposium on
Y1 - 2011
U6 - https://doi.org/10.1109/CCGrid.2011.11
VL - 0
SP - 382
EP - 391
PB - IEEE Computer Society
CY - Los Alamitos, CA, USA
ER -
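
Editorial note: the first record above ("Transactional DHT Algorithms") summarizes quorum-based item access where operations make progress as long as a majority of replicas is reachable. The following minimal Python sketch is purely illustrative and not taken from the cited reports; replicas are simulated in memory, all names and interfaces are hypothetical, and the Paxos-based atomic commit used by the actual framework for concurrent transactions is omitted.

```python
# Illustrative sketch only (not from the cited reports): majority-quorum reads
# and writes over symmetrically replicated items. All names are hypothetical.
from typing import Dict, List, Optional, Tuple

class Replica:
    """One replica of the key space; stores (version, value) pairs in memory."""
    def __init__(self) -> None:
        self.store: Dict[str, Tuple[int, Optional[str]]] = {}
        self.available = True  # simulates reachability of the replica

    def read(self, key: str) -> Tuple[int, Optional[str]]:
        if not self.available:
            raise ConnectionError("replica unreachable")
        return self.store.get(key, (0, None))

    def write(self, key: str, version: int, value: str) -> None:
        if not self.available:
            raise ConnectionError("replica unreachable")
        if version > self.store.get(key, (0, None))[0]:
            self.store[key] = (version, value)

def quorum_read(replicas: List[Replica], key: str) -> Tuple[int, Optional[str]]:
    """Return the highest-versioned value seen by a majority of replicas."""
    answers = []
    for r in replicas:
        try:
            answers.append(r.read(key))
        except ConnectionError:
            pass  # unavailable replicas are simply skipped
    if len(answers) <= len(replicas) // 2:
        raise RuntimeError("no majority reachable; operation cannot make progress")
    return max(answers, key=lambda a: a[0])

def quorum_write(replicas: List[Replica], key: str, value: str) -> None:
    """Write with a version higher than any version seen by a read quorum."""
    version, _ = quorum_read(replicas, key)
    acks = 0
    for r in replicas:
        try:
            r.write(key, version + 1, value)
            acks += 1
        except ConnectionError:
            pass
    if acks <= len(replicas) // 2:
        raise RuntimeError("write was not acknowledged by a majority")

if __name__ == "__main__":
    replicas = [Replica() for _ in range(5)]   # symmetric replication degree 5
    quorum_write(replicas, "item", "v1")
    replicas[0].available = False              # one failed replica: still a majority
    quorum_write(replicas, "item", "v2")
    print(quorum_read(replicas, "item"))       # -> (2, 'v2')
```

In the cited work, concurrent transactions are decided by a Paxos-based atomic commit protocol rather than the naive read-then-write shown here; the sketch only illustrates why a majority of reachable replicas suffices for progress.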