TY - GEN
A1 - Schintke, Florian
A1 - Reinefeld, Alexander
A1 - Haridi, Seif
A1 - Schütt, Thorsten
T1 - Enhanced Paxos Commit for Transactions on DHTs
N2 - Key/value stores built on structured overlay networks often lack support for atomic transactions and strong data consistency among replicas. This is unfortunate, because consistency guarantees and transactions would allow a wide range of additional application domains to benefit from the inherent scalability and fault tolerance of DHTs. The Scalaris key/value store supports strong data consistency and atomic transactions. It uses an enhanced Paxos Commit protocol with only four communication steps rather than six. This improvement was possible by exploiting information from the replica distribution in the DHT. Scalaris enables the implementation of a more reliable and scalable infrastructure for collaborative Web services that require strong consistency and atomic changes across multiple items.
T3 - ZIB-Report - 09-28
KW - Paxos
KW - transactions
KW - DHT
KW - strong consistency
Y1 - 2009
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-11448
SN - 1438-0064
ER -
TY - JOUR
A1 - Salem, Farouk
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Data-flow scheduling for a scalable FLESnet
JF - CBM Progress Report 2017
Y1 - 2018
SN - 978-3-9815227-5-4
U6 - https://doi.org/10.15120/GSI-2018-00485
SP - 130
EP - 131
ER -
TY - CHAP
A1 - Gholami, Masoud
A1 - Schintke, Florian
A1 - Schütt, Thorsten
T1 - Checkpoint Scheduling for Shared Usage of Burst-Buffers in Supercomputers
T2 - Proceedings of the 47th International Conference on Parallel Processing Companion; SRMPDS 2018: The 14th International Workshop on Scheduling and Resource Management for Parallel and Distributed Systems
N2 - User-defined and system-level checkpointing have contrasting properties. While user-defined checkpoints are smaller and simpler to recover from, system-level checkpointing has better knowledge of the global system state and of parameters such as the expected mean time to failure (MTTF) per node. Both approaches lead to suboptimal checkpoint times, intervals, and sizes when concurrent checkpoints conflict and compete for the available I/O bandwidth. We combine user-defined and system-level checkpointing to exploit the benefits and avoid the drawbacks of both. Applications frequently offer to create checkpoints; the system either accepts such an offer, based on the current status and the implied cost of recalculating from the last checkpoint, or denies it, i.e., lets the application continue immediately without creating a checkpoint. To support this approach, we develop economic models for multi-application checkpointing on shared I/O resources dedicated to checkpointing (e.g., burst buffers), defining an appropriate objective function and solving a global optimization problem. Using our models, the checkpoints of applications on a supercomputer are scheduled to use the available I/O bandwidth effectively and to minimize the failure overhead (checkpoint creation plus recalculation). Our simulations show an overall reduction of the failure overhead across all nodes of up to 30% for a typical supercomputer workload (HLRN). We can also derive the most cost-effective burst-buffer bandwidth for a given node MTTF and application workload.
Y1 - 2018
U6 - https://doi.org/10.1145/3229710.3229755
SP - 44:1
EP - 44:10
ER -
TY - GEN
A1 - Gholami, Masoud
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
T1 - Modeling Checkpoint Schedules for Concurrent HPC Applications
T2 - CoSaS 2018 International Symposium on Computational Science at Scale
Y1 - 2018
ER -
TY - CHAP
A1 - Schmidtke, Robert
A1 - Schintke, Florian
A1 - Schütt, Thorsten
T1 - From Application to Disk: Tracing I/O Through the Big Data Stack
T2 - High Performance Computing ISC High Performance 2018 International Workshops, Frankfurt/Main, Germany, June 24 - 28, 2018, Revised Selected Papers, Workshop on Performance and Scalability of Storage Systems (WOPSSS)
N2 - Typical data science applications consume, process, and produce large amounts of data, making disk I/O one of the dominant factors of their overall performance and thus a worthwhile target for optimization. Distributed processing frameworks, such as Hadoop, Flink, and Spark, hide a lot of complexity from the programmer when they parallelize these applications across a compute cluster. However, this complicates reasoning about the I/O of both the application and the framework, from the distributed file system, such as HDFS, down to the local file systems. We present SFS (Statistics File System), a modular framework that traces each I/O request issued by the application and by any JVM-based big data framework involved, mapping these requests to actual disk I/O. This allows detecting inefficient I/O patterns, both in the applications and in the underlying frameworks, and forms the basis for improving I/O scheduling in the big data software stack.
Y1 - 2018
U6 - https://doi.org/10.1007/978-3-030-02465-9_6
SP - 89
EP - 102
ER -
TY - CHAP
A1 - Weinhold, Carsten
A1 - Lackorzynski, Adam
A1 - Bierbaum, Jan
A1 - Küttler, Martin
A1 - Planeta, Maksym
A1 - Weisbach, Hannes
A1 - Hille, Matthias
A1 - Härtig, Hermann
A1 - Margolin, Alexander
A1 - Sharf, Dror
A1 - Levy, Ely
A1 - Gak, Pavel
A1 - Barak, Amnon
A1 - Gholami, Masoud
A1 - Schintke, Florian
A1 - Schütt, Thorsten
A1 - Reinefeld, Alexander
A1 - Lieber, Matthias
A1 - Nagel, Wolfgang
T1 - FFMK: A Fast and Fault-Tolerant Microkernel-Based System for Exascale Computing
T2 - Software for Exascale Computing - SPPEXA 2016-2019
Y1 - 2019
U6 - https://doi.org/10.1007/978-3-030-47956-5_16
SP - 483
EP - 516
PB - Springer
ER -
TY - CHAP
A1 - Kindermann, S.
A1 - Schintke, Florian
A1 - Fritzsch, B.
T1 - A Collaborative Data Management Infrastructure for Climate Data Analysis
T2 - Geophysical Research Abstracts
Y1 - 2012
UR - http://meetingorganizer.copernicus.org/EGU2012/EGU2012-10569.pdf
U6 - https://doi.org/10013/epic.39635.d001
VL - 14, EGU2012-10569
ER -
TY - CHAP
A1 - Enke, Harry
A1 - Fiedler, Norman
A1 - Fischer, Thomas
A1 - Gnadt, Timo
A1 - Ketzan, Erik
A1 - Ludwig, Jens
A1 - Rathmann, Torsten
A1 - Stöckle, Gabriel
A1 - Schintke, Florian
ED - Enke, Harry
ED - Ludwig, Jens
T1 - Leitfaden zum Forschungsdaten-Management
T2 - Leitfaden zum Forschungsdaten-Management
Y1 - 2013
PB - Verlag Werner Hülsbusch, Glückstadt
ER -
TY - JOUR
A1 - Enke, Harry
A1 - Partl, Adrian
A1 - Reinefeld, Alexander
A1 - Schintke, Florian
T1 - Handling Big Data in Astronomy and Astrophysics
JF - Datenbank-Spektrum
Y1 - 2012
UR - http://dx.doi.org/10.1007/s13222-012-0099-1
U6 - https://doi.org/10.1007/s13222-012-0099-1
VL - 12
IS - 3
SP - 173
EP - 181
PB - Springer-Verlag
ER -
TY - JOUR
A1 - Schintke, Florian
T1 - XtreemFS & Scalaris
JF - Science & Technology
Y1 - 2013
IS - 6
SP - 54
EP - 55
PB - Pan European Networks
ER -