TY - CHAP
A1 - Störl, Uta
A1 - Müller, Daniel
A1 - Tekleab, Alexander
A1 - Tolale, Stephane
A1 - Stenzel, Julian
A1 - Klettke, Meike
A1 - Scherzinger, Stefanie
T1 - Curating Variational Data in Application Development
T2 - 2018 IEEE 34th International Conference on Data Engineering, 16-19 April 2018, Paris, France
N2 - Building applications for processing data lakes is a software engineering challenge. We present Darwin, a middleware for applications that operate on variational data. This concerns data with heterogeneous structure, usually stored within a schema-flexible NoSQL database. Darwin assists application developers in essential data and schema curation tasks: upon request, Darwin extracts a schema description, discovers the history of schema versions, and proposes mappings between these versions. Users of Darwin may interactively choose which mappings are most realistic. Darwin is further capable of rewriting queries at runtime, to ensure that queries also comply with legacy data. Alternatively, Darwin can migrate legacy data to reduce the structural heterogeneity. Using Darwin, developers may thus evolve their data in sync with their code. In our hands-on demo, we curate synthetic as well as real-life datasets.
KW - data migration
KW - Data mining
KW - history
KW - NoSQL databases
KW - query rewriting
KW - schema evolution
KW - schema management
KW - Software
KW - variational data
Y1 - 2018
U6 - https://doi.org/10.1109/ICDE.2018.00187
SP - 1605
EP - 1608
PB - IEEE
ER -

TY - CHAP
A1 - Klettke, Meike
A1 - Awolin, Hannes
A1 - Störl, Uta
A1 - Müller, Daniel
A1 - Scherzinger, Stefanie
T1 - Uncovering the Evolution History of Data Lakes
T2 - 2017 IEEE International Conference on Big Data (Big Data), 11-14 Dec. 2017, Boston, MA, USA
N2 - Data accumulating in data lakes can become inaccessible in the long run when its semantics are not available. The heterogeneity of data formats and the sheer volume of data collections prohibit cleaning and unifying the data manually. Thus, tools for automated data lake analysis are of great interest. In this paper, we target the particular problem of reconstructing the schema evolution history from data lakes. Knowing how the data is structured, and how this structure has evolved over time, enables programmatic access to the lake. By deriving a sequence of schema versions, rather than a single schema, we take into account structural changes over time. Moreover, we address the challenge of detecting inclusion dependencies. This is a prerequisite for mapping between succeeding schema versions, and in particular, for detecting nontrivial changes such as a property having been moved or copied. We evaluate our approach for detecting inclusion dependencies using the MovieLens dataset, as well as an adaptation of a dataset containing botanical descriptions, to cover specific edge cases.
KW - Data mining
KW - evolution operations
KW - history
KW - inclusion dependencies
KW - integrity constraints
KW - NoSQL databases
KW - schema version extraction
Y1 - 2017
U6 - https://doi.org/10.1109/BigData.2017.8258204
SP - 2462
EP - 2471
PB - IEEE
ER -