
From Application to Disk: Tracing I/O Through the Big Data Stack

Typical applications in data science consume, process, and produce large amounts of data, making disk I/O one of the dominating factors of their overall performance and thus a worthwhile target for optimization. Distributed processing frameworks such as Hadoop, Flink, and Spark hide a lot of complexity from the programmer when they parallelize these applications across a compute cluster. This complicates reasoning about the I/O of both the application and the framework, through the distributed file system, such as HDFS, down to the local file systems. We present SFS (Statistics File System), a modular framework that traces each I/O request issued by the application and by any JVM-based big data framework involved, mapping these requests to the actual disk I/O. This allows the detection of inefficient I/O patterns, both in the applications and in the underlying frameworks, and forms the basis for improved I/O scheduling in the big data software stack.
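The core idea of stream-level tracing in the JVM, intercepting each read so it can be counted and attributed before it reaches the underlying file system, can be illustrated with a minimal sketch. The wrapper below is a hypothetical illustration only, not the authors' SFS implementation; the class name TracingInputStream and its counters are invented for this example.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: wrap an InputStream and record how many read
// calls are issued and how many bytes they actually transfer. SFS
// instruments JVM I/O at a lower level; this only conveys the idea.
public class TracingInputStream extends FilterInputStream {
    private final AtomicLong readCalls = new AtomicLong();
    private final AtomicLong bytesRead = new AtomicLong();

    public TracingInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int b = super.read();          // delegate to the wrapped stream
        readCalls.incrementAndGet();
        if (b >= 0) {
            bytesRead.incrementAndGet();
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        readCalls.incrementAndGet();
        if (n > 0) {
            bytesRead.addAndGet(n);
        }
        return n;
    }

    public long getReadCalls() { return readCalls.get(); }
    public long getBytesRead() { return bytesRead.get(); }
}
```

Aggregating such per-stream counters across application and framework threads, and correlating them with block-device statistics, is one way to map logical requests to physical disk I/O as the abstract describes.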


Metadata
Author:Robert Schmidtke, Florian Schintke, Thorsten Schütt
Document Type:In Proceedings
Parent Title (English):High Performance Computing: ISC High Performance 2018 International Workshops, Frankfurt/Main, Germany, June 24-28, 2018, Revised Selected Papers, Workshop on Performance and Scalability of Storage Systems (WOPSSS)
First Page:89
Last Page:102
Year of first publication:2018
DOI:https://doi.org/10.1007/978-3-030-02465-9_6