TY - JOUR A1 - Schönberger, Manuel A1 - Scherzinger, Stefanie A1 - Mauerer, Wolfgang T1 - Ready to Leap (by Co-Design)? Join Order Optimisation on Quantum Hardware JF - Proceedings of the ACM on Management of Data, PACMMOD N2 - The prospect of achieving computational speedups by exploiting quantum phenomena makes the use of quantum processing units (QPUs) attractive for many algorithmic database problems. Query optimisation, which concerns problems that typically need to explore large search spaces, seems like an ideal match for the known quantum algorithms. We present the first quantum implementation of join ordering, which is one of the most investigated and fundamental query optimisation problems, based on a reformulation as quadratic unconstrained binary optimisation (QUBO) problems. We empirically characterise our method on two state-of-the-art approaches (gate-based quantum computing and quantum annealing), and identify speed-ups compared to the best known classical join ordering approaches for input sizes that can be processed with current quantum annealers. However, we also confirm that limits of early-stage technology are quickly reached. Current QPUs are classified as noisy intermediate-scale quantum (NISQ) computers, and are restricted by a variety of limitations that reduce their capabilities as compared to ideal future quantum computers, which prevents us from scaling up problem dimensions and reaching practical utility. To overcome these challenges, our formulation accounts for specific QPU properties and limitations, and allows us to trade between achievable solution quality and possible problem size. In contrast to all prior work on quantum computing for query optimisation and database-related challenges, we go beyond currently available QPUs, and explicitly target the scalability limitations: Using insights gained from numerical simulations and our experimental analysis, we identify key criteria for co-designing QPUs to improve their usefulness for join ordering, and show how even relatively minor physical architectural improvements can result in substantial enhancements. Finally, we outline a path towards practical utility of custom-designed QPUs. KW - Hardware KW - Quantum computation KW - Information systems KW - Join algorithms Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-56634 N1 - Corresponding author: Manuel Schönberger VL - 1 IS - 1 SP - 1 EP - 27 PB - ACM CY - New York, NY ER - TY - JOUR A1 - Scharfenberg, Georg A1 - Mottok, Jürgen A1 - Artmann, Christina A1 - Hobelsberger, Martin A1 - Paric, Ivan A1 - Großmann, Benjamin A1 - Pohlt, Clemens A1 - Wackerbarth, Alena A1 - Pausch, Uli A1 - Heidrich, Christiane A1 - Fadanelli, Martin A1 - Elsner, Michael A1 - Pöcher, Daniel A1 - Pittroff, Lenz A1 - Beer, Stefan A1 - Brückl, Oliver A1 - Haslbeck, Matthias A1 - Sterner, Michael A1 - Thema, Martin A1 - Muggenthaler, Nicole A1 - Lenck, Thorsten A1 - Götz, Philipp A1 - Eckert, Fabian A1 - Deubzer, Michael A1 - Stingl, Armin A1 - Simsek, Erol A1 - Krämer, Stefan A1 - Großmann, Benjamin A1 - Schlegl, Thomas A1 - Niedersteiner, Sascha A1 - Berlehner, Thomas A1 - Joblin, Mitchell A1 - Mauerer, Wolfgang A1 - Apel, Sven A1 - Siegmund, Janet A1 - Riehle, Dirk A1 - Weber, Joachim A1 - Palm, Christoph A1 - Zobel, Martin A1 - Al-Falouji, Ghassan A1 - Prestel, Dietmar A1 - Scharfenberg, Georg A1 - Mandl, Roland A1 - Deinzer, Arnulf A1 - Halang, W.
A1 - Margraf-Stiksrud, Jutta A1 - Sick, Bernhard A1 - Deinzer, Renate A1 - Scherzinger, Stefanie A1 - Klettke, Meike A1 - Störl, Uta A1 - Wiech, Katharina A1 - Kubata, Christoph A1 - Sindersberger, Dirk A1 - Monkman, Gareth J. A1 - Dollinger, Markus A1 - Dembianny, Sven A1 - Kölbl, Andreas A1 - Welker, Franz A1 - Meier, Matthias A1 - Thumann, Philipp A1 - Swidergal, Krzysztof A1 - Wagner, Marcus A1 - Haug, Sonja A1 - Vernim, Matthias A1 - Seidenstücker, Barbara A1 - Weber, Karsten A1 - Arsan, Christian A1 - Schone, Reinhold A1 - Münder, Johannes A1 - Schroll-Decker, Irmgard A1 - Dillinger, Andrea Elisabeth A1 - Fuchshofer, Rudolf A1 - Monkman, Gareth J. A1 - Shamonin (Chamonine), Mikhail A1 - Geith, Markus A. A1 - Koch, Fabian A1 - Ühlin, Christian A1 - Schratzenstaller, Thomas A1 - Saßmannshausen, Sean Patrick A1 - Auchter, Eberhard A1 - Kriz, Willy A1 - Springer, Othmar A1 - Thumann, Maria A1 - Kusterle, Wolfgang A1 - Obermeier, Andreas A1 - Udalzow, Anton A1 - Schmailzl, Anton A1 - Hierl, Stefan A1 - Langer, Christoph A1 - Schreiner, Rupert ED - Baier, Wolfgang T1 - Forschungsbericht / Ostbayerische Technische Hochschule Regensburg T3 - Forschungsberichte der OTH Regensburg - 2015 Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-13867 SN - 978-3-00-048589-3 CY - Regensburg ER - TY - JOUR A1 - Beimler, Josef A1 - Leißl, Caroline A1 - Ebner, Lena A1 - Elsner, Michael A1 - Mühlbauer, Gerhard A1 - Kohlert, Dieter A1 - Schubert, Martin J. W. A1 - Weiß, Andreas P. A1 - Sterner, Michael A1 - Raith, Thomas A1 - Afranseder, Martin A1 - Krapf, Tobias A1 - Mottok, Jürgen A1 - Siemers, Christian A1 - Großmann, Benjamin A1 - Höcherl, Johannes A1 - Schlegl, Thomas A1 - Schneider, Ralph A1 - Milaev, Johannes A1 - Rampelt, Christina A1 - Roduner, Christian A1 - Glowa, Christoph A1 - Bachl, Christoph A1 - Schliekmann, Claus A1 - Gnan, Alfons A1 - Grill, Martin A1 - Ruhland, Karl A1 - Piehler, Thomas A1 - Friers, Daniel A1 - Wels, Harald A1 - Pflug, Kenny A1 - Kucera, Markus A1 - Waas, Thomas A1 - Schlachetzki, Felix A1 - Boy, Sandra A1 - Pemmerl, Josef A1 - Leis, Alexander A1 - Welsch, Andreas F.X. A1 - Graf, Franz A1 - Zenger, Gerhard A1 - Volbert, Klaus A1 - Waas, Thomas A1 - Scherzinger, Stefanie A1 - Klettke, Meike A1 - Störl, Uta A1 - Heyl, C. A1 - Boldenko, A. A1 - Monkman, Gareth J. 
A1 - Kujat, Richard A1 - Briem, Ulrich A1 - Hierl, Stefan A1 - Talbot, Sebastian A1 - Schmailzl, Anton A1 - Ławrowski, Robert Damian A1 - Prommesberger, Christian A1 - Langer, Christoph A1 - Dams, Florian A1 - Schreiner, Rupert A1 - Valentino, Piergiorgio A1 - Romano, Marco A1 - Ehrlich, Ingo A1 - Furgiuele, Franco A1 - Gebbeken, Norbert A1 - Eisenried, Michael A1 - Jungbauer, Bastian A1 - Hutterer, Albert A1 - Bauhuber, Michael A1 - Mikrievskij, Andreas A1 - Argauer, Monika A1 - Hummel, Helmut A1 - Lechner, Alfred A1 - Liebetruth, Thomas A1 - Schumm, Michael A1 - Joseph, Saskia A1 - Reschke, Michael A1 - Soska, Alexander A1 - Schroll-Decker, Irmgard A1 - Putzer, Michael A1 - Rasmussen, John A1 - Dendorfer, Sebastian A1 - Weber, Tim A1 - Al-Munajjed, Amir Andreas A1 - Verkerke, Gijsbertus Jacob A1 - Renkawitz, Tobias A1 - Haug, Sonja A1 - Rudolph, Clarissa A1 - Zeitler, Annika A1 - Schaubeck, Simon A1 - Steffens, Oliver A1 - Rechenauer, Christian A1 - Schulz-Brize, Thekla A1 - Fleischmann, Florian A1 - Kusterle, Wolfgang A1 - Beer, Anne A1 - Wagner, Bernd A1 - Neidhart, Thomas ED - Baier, Wolfgang T1 - Forschungsbericht 2013 T3 - Forschungsberichte der OTH Regensburg - 2013 Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-7990 CY - Regensburg ER - TY - JOUR A1 - Weber, Karsten A1 - Dendorfer, Sebastian A1 - Süß, Franz A1 - Kubowitsch, Simone A1 - Schratzenstaller, Thomas A1 - Haug, Sonja A1 - Mohr, Christa A1 - Kiesl, Hans A1 - Drechsler, Jörg A1 - Westner, Markus A1 - Kobus, Jörn A1 - Schubert, Martin J. W. A1 - Zenger, Stefan A1 - Pietsch, Alexander A1 - Weiß, Josef A1 - Hinterseer, Sebastian A1 - Schieck, Roland A1 - Scherzinger, Stefanie A1 - Klettke, Meike A1 - Ringlstetter, Andreas A1 - Störl, Uta A1 - Bissyandé, Tegawendé F. A1 - Seeburger, Achim A1 - Schindler, Timo A1 - Ramsauer, Ralf A1 - Kiszka, Jan A1 - Kölbl, Andreas A1 - Lohmann, Daniel A1 - Mauerer, Wolfgang A1 - Maier, Johannes A1 - Scorna, Ulrike A1 - Palm, Christoph A1 - Soska, Alexander A1 - Mottok, Jürgen A1 - Ellermeier, Andreas A1 - Vögele, Daniel A1 - Hierl, Stefan A1 - Briem, Ulrich A1 - Buschmann, Knut A1 - Ehrlich, Ingo A1 - Pongratz, Christian A1 - Pielmeier, Benjamin A1 - Tyroller, Quirin A1 - Monkman, Gareth J. A1 - Gut, Franz A1 - Roth, Carina A1 - Hausler, Peter A1 - Bierl, Rudolf A1 - Prommesberger, Christian A1 - Ławrowski, Robert Damian A1 - Langer, Christoph A1 - Schreiner, Rupert A1 - Huang, Yifeng A1 - She, Juncong A1 - Ottl, Andreas A1 - Rieger, Walter A1 - Kraml, Agnes A1 - Poxleitner, Thomas A1 - Hofer, Simon A1 - Heisterkamp, Benjamin A1 - Lerch, Maximilian A1 - Sammer, Nike A1 - Golde, Olivia A1 - Wellnitz, Felix A1 - Schmid, Sandra A1 - Muntschick, Claudia A1 - Kusterle, Wolfgang A1 - Paric, Ivan A1 - Brückl, Oliver A1 - Haslbeck, Matthias A1 - Schmidt, Ottfried A1 - Schwanzer, Peter A1 - Rabl, Hans-Peter A1 - Sterner, Michael A1 - Bauer, Franz A1 - Steinmann, Sven A1 - Eckert, Fabian A1 - Hofrichter, Andreas ED - Baier, Wolfgang T1 - Forschungsbericht 2017 T3 - Forschungsberichte der OTH Regensburg - 2017 KW - Forschung KW - Forschungsbericht Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-13835 SN - 978-3-9818209-3-5 CY - Regensburg ER - TY - GEN A1 - Mintel, Mario A1 - Ramsauer, Ralf A1 - Lohmann, Daniel A1 - Scherzinger, Stefanie A1 - Mauerer, Wolfgang T1 - Fork à la carte für In-Memory-Datenbanken T2 - Frühjahrstreffen der Fachgruppen Betriebssysteme, Hamburg, 17. 
März 2022 Y1 - 2022 UR - https://www.lfdr.de/Publications/2022/FGBS_Spring22_Mintel.pdf ER - TY - CHAP A1 - Scherzinger, Stefanie A1 - Seifert, Christin A1 - Wiese, Lena T1 - The Best of Both Worlds: Challenges in Linking Provenance and Explainability in Distributed Machine Learning T2 - 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), 7-10 July 2019, Dallas, TX, USA N2 - Machine learning experts prefer to think of their input as a single, homogeneous, and consistent data set. However, when analyzing large volumes of data, the entire data set may not be manageable on a single server, but must be stored on a distributed file system instead. Moreover, with the pressing demand to deliver explainable models, the experts may no longer focus on the machine learning algorithms in isolation, but must take into account the distributed nature of the data stored, as well as the impact of any data pre-processing steps upstream in their data analysis pipeline. In this paper, we make the point that even basic transformations during data preparation can impact the model learned, and that this is exacerbated in a distributed setting. We then sketch our vision of end-to-end explainability of the model learned, taking the pre-processing into account. In particular, we point out the potential of linking the contributions of research on data provenance with the efforts on explainability in machine learning. In doing so, we highlight pitfalls we may experience in a distributed system on the way to generating more holistic explanations for our machine learning models. KW - Computational modeling KW - Data models KW - Decision trees KW - distributed computing KW - Distributed databases KW - explainable machine learning KW - Machine learning Y1 - 2019 U6 - https://doi.org/10.1109/ICDCS.2019.00161 SP - 1620 EP - 1629 PB - IEEE ER - TY - CHAP A1 - Hillenbrand, Andrea A1 - Levchenko, Maksym A1 - Störl, Uta A1 - Scherzinger, Stefanie A1 - Klettke, Meike ED - Boncz, Peter ED - Manegold, Stefan ED - Ailamaki, Anastasia ED - Deshpande, Amol ED - Kraska, Tim T1 - MigCast : Putting a Price Tag on Data Model Evolution in NoSQL Data Stores T2 - Proceedings of the 2019 International Conference on Management of Data (SIGMOD/PODS '19) June 2019, Amsterdam, Netherlands N2 - We demonstrate MigCast, a tool-based advisor for exploring data migration strategies in the context of developing NoSQL-backed applications. Users of MigCast can consider their options for evolving their data model along with legacy data already persisted in the cloud-hosted production database. They can explore alternative actions, as the financial costs are predicted with respect to the chosen cloud provider. Thereby they are better equipped to assess potential consequences of imminent data migration decisions. To this end, MigCast maintains an internal cost model, taking into account characteristics of the data instance, expected workload, data model changes, and cloud provider pricing models. Hence, MigCast enables software project stakeholders to remain in control of the operative costs and to make informed decisions when evolving their applications.
KW - Data Migration Strategies KW - latency KW - migration costs KW - NoSQL databases KW - Predictive Migration KW - schema evolution Y1 - 2019 SN - 9781450356435 U6 - https://doi.org/10.1145/3299869.3320223 SP - 1925 EP - 1928 PB - ACM CY - New York, NY, USA ER - TY - CHAP A1 - Seifert, Christin A1 - Scherzinger, Stefanie A1 - Wiese, Lena T1 - Towards Generating Consumer Labels for Machine Learning Models T2 - 2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI), 12-14 Dec. 2019, Dallas, TX, USA N2 - Machine learning (ML) based decision making is becoming commonplace. For persons affected by ML-based decisions, a certain level of transparency regarding the properties of the underlying ML model can be fundamental. In this vision paper, we propose to issue consumer labels for trained and published ML models. These labels primarily target machine learning laypersons, such as the operators of an ML system, the executors of decisions, and the decision subjects themselves. Provided that consumer labels comprehensively capture the characteristics of the trained ML model, consumers are enabled to recognize when human intelligence should supersede artificial intelligence. In the long run, we envision a service that generates these consumer labels (semi-)automatically. In this paper, we survey the requirements that an ML system should meet, and correspondingly, the properties that an ML consumer label could capture. We further discuss the feasibility of operationalizing and benchmarking these requirements in the automated generation of ML consumer labels. KW - Artificial intelligence KW - Consumer labels KW - Transparency KW - x-AI KW - Biological system modeling KW - Data models KW - Machine learning KW - Measurement KW - Predictive models KW - Robustness KW - Training Y1 - 2019 U6 - https://doi.org/10.1109/CogMI48466.2019.00033 SP - 173 EP - 179 PB - IEEE ER - TY - CHAP A1 - Holubová, Irena A1 - Scherzinger, Stefanie ED - Groppe, Sven ED - Gruenwald, Le T1 - Unlocking the potential of nextGen multi-model databases for semantic big data projects T2 - Proceedings of the International Workshop on Semantic Big Data - (SBD '19) 05.07.2019 - 05.07.2019, Amsterdam, Netherlands N2 - A new vision in semantic big data processing is to create enterprise data hubs, with a 360° view on all data that matters to a corporation. As we discuss in this paper, a new generation of multi-model database systems seems a promising architectural choice for building such scalable, non-native triple stores. In this paper, we first characterize this new generation of multi-model databases. Then, discussing an example scenario, we show how they allow for agile and flexible schema management, spanning a large design space for creative and incremental data modelling. We identify the challenge of generating sound triple-views from data stored in several, interlinked models, for SPARQL querying. We regard this as one of several appealing research challenges where the semantic big data and the database architecture community may join forces.
Y1 - 2019 SN - 9781450367660 U6 - https://doi.org/10.1145/3323878.3325807 SP - 1 EP - 6 PB - ACM Press CY - New York ER - TY - JOUR A1 - Störl, Uta A1 - Klettke, Meike A1 - Scherzinger, Stefanie T1 - Kurz erklärt: Objekt-NoSQL-Mapping JF - Datenbank-Spektrum Y1 - 2016 U6 - https://doi.org/10.1007/s13222-016-0212-y VL - 16 IS - 1 SP - 83 EP - 87 PB - Springer ER - TY - CHAP A1 - Schönberger, Manuel A1 - Franz, Maja A1 - Scherzinger, Stefanie A1 - Mauerer, Wolfgang T1 - Peel | Pile? Cross-Framework Portability of Quantum Software T2 - 2022 IEEE 19th International Conference on Software Architecture Companion (ICSA-C), 12-15 March 2022, Honolulu, HI, USA N2 - In recent years, various vendors have made quantum software frameworks available. Yet with vendor-specific frameworks, code portability seems at risk, especially in a field where hardware and software libraries have not yet reached a consolidated state, and even foundational aspects of the technologies are still in flux. Accordingly, the development of vendor-independent quantum programming languages and frameworks is often suggested. This follows the established architectural pattern of introducing additional levels of abstraction into software stacks, thereby piling on layers of abstraction. Yet software architecture also provides seemingly less abstract alternatives, namely to focus on hardware-specific formulations of problems that peel off unnecessary layers. In this article, we quantitatively and experimentally explore these strategic alternatives, and compare popular quantum frameworks from the software implementation perspective. We find that for several specific, yet generalisable problems, the mathematical formulation of the problem to be solved is not just sufficiently abstract to serve as a precise description, but is likewise concrete enough to allow for deriving framework-specific implementations with little effort. Additionally, we argue, based on analysing dozens of existing quantum codes, that porting between frameworks is actually low-effort, since the quantum- and framework-specific portions are very manageable in terms of size, commonly in the order of mere hundreds of lines of code. Given the current state-of-the-art in quantum programming practice, this leads us to argue in favour of peeling off unnecessary abstraction levels. KW - Computer Science KW - Quantum Physics KW - Software Engineering Y1 - 2022 U6 - https://doi.org/10.1109/ICSA-C54293.2022.00039 N1 - Preprint at: https://arxiv.org/abs/2203.06289 PB - IEEE ER - TY - CHAP A1 - Mauerer, Wolfgang A1 - Ramsauer, Ralf A1 - Lucas, Edson R. F. A1 - Scherzinger, Stefanie T1 - Silentium! Run-Analyse-Eradicate the Noise out of the DB/OS Stack T2 - Datenbanksysteme für Business, Technologie und Web (BTW 2021): 13.-17. September 2021, Dresden, Deutschland N2 - When multiple tenants compete for resources, database performance tends to suffer. Yet there are scenarios where guaranteed sub-millisecond latencies are crucial, such as in real-time data processing, IoT devices, or when operating in safety-critical environments. In this paper, we study how to make query latencies deterministic in the face of noise (whether caused by other tenants or unrelated operating system tasks). We perform controlled experiments with an in-memory database engine in a multi-tenant setting, where we successively eradicate noisy interference from within the system software stack, to the point where the engine runs close to bare-metal on the underlying hardware.
We show that we can achieve query latencies comparable to the database engine running as the sole tenant, but without noticeably impacting the workload of competing tenants. We discuss these results in the context of ongoing efforts to build custom operating systems for database workloads, and point out that for certain use cases, the margin for improvement is rather narrow. In fact, for scenarios like ours, existing operating systems might just be good enough, provided that they are expertly configured. We then critically discuss these findings in the light of a broader family of database systems (e.g., including disk-based), and how to extend the approach of this paper accordingly. KW - bounded-time query processing KW - DB-OS co-engineering KW - Low-latency databases KW - real-time databases KW - tail latency Y1 - 2021 U6 - https://doi.org/10.18420/btw2021-21 SP - 397 EP - 421 PB - Gesellschaft für Informatik ER - TY - CHAP A1 - Mauerer, Wolfgang A1 - Scherzinger, Stefanie ED - Ailamaki, Anastasia T1 - Nullius in Verba: Reproducibility for Database Systems Research, Revisited T2 - 2021 IEEE 37th International Conference on Data Engineering (ICDE 2021): 19-22 April 2021, Chania, Greece N2 - Over the last decade, reproducibility of experimental results has been a prime focus in database systems research, and many high-profile conferences award results that can be independently verified. Since database systems research involves complex software stacks that non-trivially interact with hardware, sharing experimental setups is anything but trivial: Building a working reproduction package goes far beyond providing a DOI to some repository hosting data, code, and setup instructions. This tutorial revisits reproducible engineering in the face of state-of-the-art technology, and best practices gained in other computer science research communities. In particular, in the hands-on part, we demonstrate how to package entire system software stacks for dissemination. To ascertain long-term reproducibility over decades (or ideally, forever), we discuss why relying on open source technologies massively employed in industry has essential advantages over approaches crafted specifically for research. Supplementary material shows how version control systems that allow for non-linearly rewriting recorded history can document the structured genesis behind experimental setups in a way that is substantially easier to understand, without involvement of the original authors, compared to detour-ridden, strictly historic evolution. KW - Control systems KW - Data engineering KW - Database systems KW - docker KW - git KW - Hardware KW - Industries KW - Reproducibility of results KW - reproducible science KW - reproduction KW - reproduction package KW - scientific attribution KW - scientific method KW - Tutorials Y1 - 2021 SN - 978-1-7281-9184-3 U6 - https://doi.org/10.1109/ICDE51399.2021.00270 SP - 2377 EP - 2380 PB - IEEE CY - Piscataway, NJ ER - TY - CHAP A1 - Scherzinger, Stefanie A1 - Mauerer, Wolfgang A1 - Kondylakis, Haridimos ED - Ailamaki, Anastasia T1 - DeBinelle: Semantic Patches for Coupled Database-Application Evolution T2 - 2021 IEEE 37th International Conference on Data Engineering (ICDE 2021): 19-22 April 2021, Chania, Greece N2 - Databases are at the core of virtually any software product. Changes to database schemas cannot be made in isolation, as they are intricately coupled with application code.
Such couplings enforce collateral evolution, which is a recognised, important research problem. In this demonstration, we show a new dimension to this problem, in software that supports alternative database backends: vendor-specific SQL dialects necessitate a simultaneous evolution of both database schema and program code, for all supported DB variants. These nearly identical changes impose substantial manual effort for software developers. We introduce DeBinelle, a novel framework and domain-specific language for semantic patches that abstracts DB-variant schema changes and coupled program code into a single, unified representation. DeBinelle further offers a novel alternative to manually evolving coupled schemas and code. DeBinelle considerably extends established, seminal results in software engineering research, supporting several programming languages, and the many dialects of SQL. It effectively eliminates the need to perform vendor-specific changes, replacing them with intuitive semantic patches. Our demo of DeBinelle is based on real-world use cases from reference systems for schema evolution. KW - database management systems KW - databases KW - evolution KW - programming language semantics KW - semantic patches KW - software product lines KW - specification languages KW - SQL Y1 - 2021 SN - 978-1-7281-9184-3 U6 - https://doi.org/10.1109/ICDE51399.2021.00307 SP - 2697 EP - 2700 PB - IEEE CY - Piscataway, NJ ER - TY - CHAP A1 - Fruth, Michael A1 - Scherzinger, Stefanie A1 - Mauerer, Wolfgang A1 - Ramsauer, Ralf ED - Nambiar, Raghunath ED - Poess, Meikel T1 - Tell-Tale Tail Latencies: Pitfalls and Perils in Database Benchmarking T2 - Performance evaluation and benchmarking, 13th TPC Technology Conference (TPCTC 2021): Copenhagen, Denmark, August 20, 2021, Revised Selected Papers N2 - The performance of database systems is usually characterised by their average-case (i.e., throughput) behaviour in standardised or de-facto standard benchmarks like TPC-X or YCSB. While tails of the latency (i.e., response time) distribution receive considerably less attention, they have been identified as a threat to the overall system performance: In large-scale systems, even a small fraction of delayed requests can build up into delays perceivable by end users. To eradicate large tail latencies from database systems, the ability to faithfully record them, and likewise pinpoint them to the root causes, is urgently required. In this paper, we address the challenge of measuring tail latencies using standard benchmarks, and identify subtle perils and pitfalls. In particular, we demonstrate how Java-based benchmarking approaches can substantially distort tail latency observations, and discuss how the discovery of such problems is inhibited by the common focus on throughput performance. We make a case for purposefully re-designing database benchmarking harnesses based on these observations to arrive at faithful characterisations of database performance from multiple important angles. KW - Benchmark harness KW - Database benchmarks KW - Tail latencies Y1 - 2022 SN - 9783030944377 U6 - https://doi.org/10.1007/978-3-030-94437-7_8 SP - 119 EP - 134 PB - Springer CY - Cham, Switzerland ER - TY - CHAP A1 - Mauerer, Wolfgang A1 - Scherzinger, Stefanie T1 - 1-2-3 Reproducibility for Quantum Software Experiments T2 - 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), Honolulu, HI, USA, 15-18 March 2022 N2 - Various fields of science face a reproducibility crisis.
For quantum software engineering as an emerging field, it is therefore imperative to focus on proper reproducibility engineering from the start. Yet the provision of reproduction packages is almost universally lacking. Actionable advice on how to build such packages is rare, which is particularly unfortunate in a field with many contributions from researchers with backgrounds outside computer science. In this article, we argue how to rectify this deficiency by proposing a 1-2-3 approach to reproducibility engineering for quantum software experiments: Using a meta-generation mechanism, we generate DOI-safe, long-term functioning and dependency-free reproduction packages. They are designed to satisfy the requirements of professional and learned societies solely on the basis of project-specific research artefacts (source code, measurement and configuration data), and require little time investment by researchers. Our scheme ascertains long-term traceability even when the quantum processor itself is no longer accessible. By drastically lowering the technical bar, we foster the proliferation of reproduction packages in quantum software experiments and ease the inclusion of non-CS researchers entering the field. KW - Computer Science KW - Quantum Physics KW - Software Engineering Y1 - 2022 U6 - https://doi.org/10.1109/SANER53432.2022.00148 SP - 1247 EP - 1248 PB - IEEE ER - TY - CHAP A1 - Braininger, Dimitri A1 - Mauerer, Wolfgang A1 - Scherzinger, Stefanie ED - Grossmann, Georg ED - Ram, Sudha T1 - Replicability and Reproducibility of a Schema Evolution Study in Embedded Databases T2 - Advances in conceptual modeling: ER 2020 Workshops CMAI, CMLS, CMOMM4FAIR, CoMoNoS, EmpER, Vienna, Austria, November 3-6, 2020, Proceedings N2 - Ascertaining the feasibility of independent falsification or repetition of published results is vital to the scientific process, and replication or reproduction experiments are routinely performed in many disciplines. Unfortunately, such studies are scarce in database research, with few papers dedicated to re-evaluating published results. In this paper, we conduct a case study on replicating and reproducing a study on schema evolution in embedded databases. We can exactly repeat the outcome for one out of four database applications studied, and come close in two further cases. By reporting results, efforts, and obstacles encountered, we hope to increase appreciation for the substantial efforts required to ensure reproducibility. By discussing the minutiae required to ascertain reproducible work, we argue that such important, but often ignored aspects of scientific work should receive more credit in the evaluation of future research. KW - replicability KW - reproducibility KW - schema evolution Y1 - 2020 SN - 978-3-030-65846-5 U6 - https://doi.org/10.1007/978-3-030-65847-2_19 VL - 12584 SP - 210 EP - 219 PB - Springer CY - Cham ER - TY - CHAP A1 - Mauerer, Wolfgang A1 - Scherzinger, Stefanie ED - Krusche, Stephan ED - Wagner, Stefan T1 - Educating Future Software Architects in the Art and Science of Analysing Software Data T2 - SEUH 2020: Software Engineering im Unterricht der Hochschulen, Tagungsband des 17. Workshops "Software Engineering im Unterricht der Hochschulen", Innsbruck, Österreich, 26. - 27.02.2020 N2 - We report the design and teaching experience of a Master-level seminar course on quantitative and empirical software engineering.
The course combines elements of traditional literature seminars with active learning by scientific project work, in particular quantitative mixed-method analyses of open source systems. It also provides short introductions and refreshers to data mining and statistical analysis, and discusses the nature and practice of scientific knowledge inference. Student presentations of published research, augmented by summary reports, bridge to standard seminars. We discuss our educational goals and the course structure derived from them. We review research questions addressed by students in mini research reports, and analyse them as indicators of how junior-level software engineers perceive the potential of empirical software engineering research. We assess challenges faced, and discuss possible solutions. Y1 - 2020 UR - http://ceur-ws.org/Vol-2531/paper10.pdf SP - 56 EP - 60 PB - RWTH Aachen ER - TY - CHAP A1 - Haubold, Florian A1 - Schildgen, Johannes A1 - Scherzinger, Stefanie A1 - Deßloch, Stefan ED - Mitschang, Bernhard T1 - ControVol Flex: Flexible Schema Evolution for NoSQL Application Development T2 - Datenbanksysteme für Business, Technologie und Web (BTW 2017) : 17. Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS) : 06.-10.03.2017 in Stuttgart Deutschland N2 - We demonstrate ControVol Flex, an Eclipse plugin for controlled schema evolution in Java applications backed by NoSQL document stores. The sweet spot of our tool is applications that are deployed continuously against the same production data store: Each new release may bring about schema changes that conflict with legacy data already stored in production. The type system internal to the predecessor tool ControVol is able to detect common schema conflicts, and enables developers to resolve them with the help of object-mapper annotations. Our new tool ControVol Flex lets developers choose their schema-migration strategy, whether all legacy data is to be migrated eagerly by means of NotaQL transformation scripts, or lazily, as declared by object-mapper annotations. Our tool is even capable of carrying out both strategies in combination, eagerly migrating data in the background, while lazily migrating data that is meanwhile accessed by the application. From the viewpoint of the application, it remains transparent how legacy data is migrated: Every read access yields an entity that matches the structure that the current application code expects. Our live demo shows how ControVol Flex gracefully solves a broad range of common schema-evolution tasks. KW - Schema evolution KW - NotaQL KW - NoSQL Y1 - 2017 PB - Gesellschaft für Informatik e.V. (GI) CY - Bonn ER - TY - CHAP A1 - Klettke, Meike A1 - Störl, Uta A1 - Shenavai, Manuel A1 - Scherzinger, Stefanie T1 - NoSQL schema evolution and big data migration at scale T2 - 2016 IEEE International Conference on Big Data (Big Data), 5-8 Dec. 2016, Washington, DC N2 - This paper explores scalable implementation strategies for carrying out lazy schema evolution in NoSQL data stores. For decades, schema evolution has been an evergreen in database research. Yet new challenges arise in the context of cloud-hosted data backends: With all database reads and writes charged by the provider, migrating the entire data instance eagerly into a new schema can be prohibitively expensive. Thus, lazy migration may be more cost-efficient, as legacy entities are only migrated in case they are actually accessed by the application.
Related work has shown that the overhead of migrating data lazily is affordable when a single evolutionary change is carried out, such as adding a new property. In this paper, we focus on long-term schema evolution, where chains of pending schema evolution operations may have to be applied. Chains occur when legacy entities written several application releases back are finally accessed by the application. We discuss strategies for dealing with chains of evolution operations, in particular, the composition into a single, equivalent composite migration that performs the required version jump. Our experiments with MongoDB focus on scalable implementation strategies. Our lineup further compares the number of write operations, and thus the operational costs, of different data migration strategies. KW - Big data KW - Context KW - Data Migration Strategies KW - Data models KW - Databases KW - Incremental Migration KW - Lazy Composite Migration KW - Lazy Migration KW - NoSQL databases KW - Predictive Migration KW - Production KW - Runtime KW - schema evolution KW - Software Y1 - 2016 U6 - https://doi.org/10.1109/BigData.2016.7840924 SP - 2764 EP - 2774 PB - IEEE ER - TY - CHAP A1 - Heckner, Markus A1 - Bazo, Alexander A1 - Wolff, Christian A1 - Scherzinger, Stefanie T1 - Karel relearns C: Teaching good software engineering practices in CS1 with Karel the Robot T2 - 2018 IEEE Global Engineering Education Conference (EDUCON), 17-20 April 2018, Santa Cruz de Tenerife, Spain N2 - This paper describes our implementation, teaching philosophy, and experiences with our C-based version of the widely known Karel the Robot introductory programming micro-language. Karel enables students to programmatically solve problems, using the C language, in a graphical two-dimensional world by moving the robot around while checking and manipulating its surroundings. We use Karel to solve the dilemma of either demanding too much or not enough from students during the first weeks of an introductory CS course, as interesting problems can be solved with limited input from lectures. Karel enables problem solving from day one of CS1, and encourages good software engineering practices such as top-down design from the beginning. We outline typical problems in the first weeks of CS1. We present a short overview of existing Karel implementations in various programming languages and our rationale for re-implementing Karel. We present our teaching philosophy and use of Karel in the classroom. We demonstrate how Karel is being used from a student perspective, along with a typical programming task. We discuss preliminary results of a survey and interviews with students from a first course in which Karel was used. KW - Computer languages KW - Education KW - Problem-solving KW - Programming profession KW - Robots KW - Writing Y1 - 2018 U6 - https://doi.org/10.1109/EDUCON.2018.8363402 SP - 1447 EP - 1454 PB - IEEE ER - TY - CHAP A1 - Scherzinger, Stefanie A1 - Cerqueus, Thomas A1 - Cunha de Almeida, Eduardo T1 - ControVol: A framework for controlled schema evolution in NoSQL application development T2 - 2015 IEEE 31st International Conference on Data Engineering, 13-17 April 2015, Seoul, Korea (South) N2 - Building scalable web applications on top of NoSQL data stores is becoming common practice. Many of these data stores can easily be accessed programmatically, and do not enforce a schema. Software engineers can design the data model on the go, a flexibility that is crucial in agile software development.
The typical tasks of database schema management are now handled within the application code, usually involving object mapper libraries. However, today’s Integrated Development Environments (IDEs) lack proper tool support when it comes to managing the combined evolution of the application code and of the schema. Yet simple refactorings such as renaming an attribute at the source code level can cause irretrievable data loss or runtime errors once the application is serving in production. In this demo, we present ControVol, a framework for controlled schema evolution in application development against NoSQL data stores. ControVol is integrated into the IDE and statically type checks object mapper class declarations against the schema evolution history, as recorded by the code repository. ControVol is capable of warning of common yet risky cases of mismatched data and schema. ControVol is further able to suggest quick fixes by which developers can have these issues automatically resolved. KW - Databases KW - Google KW - history KW - Java KW - Production KW - Runtime KW - Software Y1 - 2015 SN - 978-1-4799-7964-6 U6 - https://doi.org/10.1109/ICDE.2015.7113402 SP - 1464 EP - 1467 PB - IEEE ER - TY - CHAP A1 - Störl, Uta A1 - Tekleab, Alexander A1 - Klettke, Meike A1 - Scherzinger, Stefanie T1 - In for a Surprise When Migrating NoSQL Data T2 - 2018 IEEE 34th International Conference on Data Engineering (ICDE), 16-19 April 2018, Paris, France N2 - Schema-flexible NoSQL data stores lend themselves nicely to storing versioned data, a product of schema evolution. In this lightning talk, we apply pending schema changes to records that have been persisted several schema versions back. We present first experiments with MongoDB and Cassandra, where we explore the trade-off between applying chains of pending changes stepwise (one after the other), and as composite operations. Contrary to intuition, composite migration is not necessarily faster. The culprit is the computational overhead for deriving the compositions. However, caching composition formulae achieves a speedup: For Cassandra, we can cut the runtime by nearly 80%. Surprisingly, the relative speedup seems to be system-dependent. Our take-away message is that in applying pending schema changes in NoSQL data stores, we need to base our design decisions on experimental evidence rather than on intuition alone. KW - composite migration KW - Conferences KW - Data engineering KW - data migration KW - Indexes KW - Lightning KW - NoSQL databases KW - Runtime KW - schema evolution KW - Tools Y1 - 2018 U6 - https://doi.org/10.1109/ICDE.2018.00202 SP - 1662 PB - IEEE ER - TY - CHAP A1 - Störl, Uta A1 - Müller, Daniel A1 - Klettke, Meike A1 - Scherzinger, Stefanie ED - Mitschang, Bernhard T1 - Enabling Efficient Agile Software Development of NoSQL-backed Applications T2 - Datenbanksysteme für Business, Technologie und Web (BTW 2017) : 17. Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS) : 06.-10.03.2017 in Stuttgart Deutschland N2 - NoSQL databases are popular in agile software development, where a frequently changing database schema imposes challenges for the production database. In this demo, we present Darwin, a middleware for systematic, tool-based support specifically designed for NoSQL database systems. Darwin carries out schema evolution and data migration tasks. To the best of our knowledge, Darwin is the first tool of its kind that supports both eager and lazy NoSQL data migration.
Y1 - 2017 SN - 978-3-88579-659-6 PB - Gesellschaft für Informatik e.V. (GI) CY - Bonn ER - TY - CHAP A1 - Scherzinger, Stefanie A1 - de Almeida, Eduardo Cunha A1 - Ickert, Felipe A1 - Del Fabro, Marcos Didonet T1 - On the necessity of model checking NoSQL database schemas when building SaaS applications T2 - Proceedings of the 2013 International Workshop on Testing the Cloud - TTC 2013 N2 - The design of the NoSQL schema has a direct impact on the scalability of web applications. Especially for developers with little experience in NoSQL stores, the risks inherent in poor schema design can be incalculable. Worse yet, the issues will only manifest once the application has been deployed, and the growing user base causes highly concurrent writes. In this paper, we present a model checking approach to reveal scalability bottlenecks in NoSQL schemas. Our approach draws on formal methods from tree automata theory to perform a conservative static analysis on both the schema and the expected write-behavior of users. We demonstrate the impact of schema-inherent bottlenecks for a popular NoSQL store, and show how concurrent writes can ultimately lead to a considerable share of failed transactions. Y1 - 2013 U6 - https://doi.org/10.1145/2489295.2489297 SP - 1 EP - 6 PB - ACM CY - New York, NY ER - TY - CHAP A1 - Filho, Edson Ramiro Lucas A1 - de Almeida, Eduardo Cunha A1 - Scherzinger, Stefanie T1 - Don’t Tune Twice: Reusing Tuning Setups for SQL-on-Hadoop Queries T2 - Conceptual Modeling : 38th International Conference, ER 2019, Salvador, Brazil, November 4-7, 2019, Proceedings N2 - SQL-on-Hadoop processing engines have become state-of-the-art in data lake analysis. However, the skills required to tune such systems are rare. This has inspired automated tuning advisors which profile the query workload and produce tuning setups for the low-level MapReduce jobs. Yet with highly dynamic query workloads, repeated re-tuning costs time and money in IaaS environments. In this paper, we focus on reducing the costs for up-front tuning. At the heart of our approach is the observation that a SQL query is compiled into a query plan of MapReduce jobs. While the plans differ from query to query, single jobs tend to be similar between queries. We introduce the notion of the code signature of a MapReduce job and, based on this, our concept of job similarity. We show that we can effectively recycle tuning setups from similar MapReduce jobs already profiled. In doing so, we can leverage any third-party tuning advisor for MapReduce engines. We are able to show that by recycling tuning setups, we can reduce the time spent on profiling by 50% in the TPC-H benchmark. Y1 - 2019 U6 - https://doi.org/10.1007/978-3-030-33223-5_9 SP - 93 EP - 107 PB - Springer CY - Cham ER - TY - JOUR A1 - Scherzinger, Stefanie T1 - Build your own SQL-on-Hadoop Query Engine: A Report on a Term Project in a Master-level Database Course JF - ACM SIGMOD Record N2 - This is a report on a course taught at OTH Regensburg in the summer term of 2018. The students in this course built their own SQL-on-Hadoop engine as a term project in just 8 weeks. miniHive is written in Python and compiles SQL queries into MapReduce workflows. These are then executed on Hadoop. miniHive performs generic query optimizations (selection and projection pushdown, cost-based join reordering), as well as MapReduce-specific optimizations. The course was taught in English, using a flipped classroom model.
The course material was mainly compiled from third-party teaching videos. This report describes the course setup, the miniHive milestones, and gives a short review of the most successful student projects. Y1 - 2019 U6 - https://doi.org/10.1145/3377330.3377336 VL - 48 IS - 2 SP - 33 EP - 38 PB - ACM ER - TY - CHAP A1 - Pilven, Matthieu A1 - Scherzinger, Stefanie A1 - d’Orazio, Laurent ED - Guizzardi, Giancarlo ED - Gailly, Frederik ED - Suzana Pitangueira Maciel, Rita T1 - On Complex Value Relations in Hive T2 - Advances in Conceptual Modeling : ER 2019 Workshops, Salvador, Brazil, November 4-7, 2019, Proceedings N2 - In this paper, we raise the question of how data architects model their data for processing in Apache Hive. This well-known SQL-on-Hadoop engine supports complex value relations, where attribute types need not be atomic. In fact, this feature seems to be one of the prominent selling points, e.g., in Hive reference books. In an empirical study, we analyze Hive schemas in open source repositories. We examine to what extent practitioners make use of complex value relations and, accordingly, whether they write queries over complex types. Understanding which features are actively used will help make the right decisions in setting up benchmarks for SQL-on-Hadoop engines, as well as in choosing which query operators to optimize for. KW - Complex value relations KW - Empirical study KW - Hive Y1 - 2019 SN - 978-3-030-34145-9 U6 - https://doi.org/10.1007/978-3-030-34146-6_13 SP - 146 EP - 156 PB - Springer International Publishing CY - Cham ER - TY - CHAP A1 - Maiwald, Benjamin A1 - Riedle, Benjamin A1 - Scherzinger, Stefanie ED - Guizzardi, Giancarlo ED - Gailly, Frederik ED - Suzana Pitangueira Maciel, Rita T1 - What Are Real JSON Schemas Like? T2 - Advances in Conceptual Modeling N2 - Recently, the semantics of the JSON Schema format, a de-facto standard for JSON schema declarations, has been formalized. It turns out that JSON Schema is a surprisingly complex schema language based on an open document semantics. In this paper, we present a first empirical analysis of a curated collection of real-world JSON Schemas. Knowing what real JSON Schemas are like (to borrow from a title of a related study on DTDs) helps practitioners and researchers in making realistic assumptions when building tools for JSON Schema processing. Y1 - 2019 SN - 978-3-030-34145-9 U6 - https://doi.org/10.1007/978-3-030-34146-6_9 VL - 11787 SP - 95 EP - 105 PB - Springer International Publishing CY - Cham ER - TY - JOUR A1 - Klettke, Meike A1 - Scherzinger, Stefanie A1 - Störl, Uta T1 - Datenbanken ohne Schema? JF - Datenbank-Spektrum N2 - NoSQL database systems are increasingly popular in the development of interactive web applications, not least because they allow for flexible data models. This particularly benefits agile project management, which is characterised by frequent releases and correspondingly frequent changes to the data model. In this article, we give an overview of the specific challenges of agile application development against schemaless NoSQL database systems. We present schema evolution strategies used in practice, and postulate our vision of a dedicated schema management component for NoSQL database systems, designed for continuous and systematic schema evolution.
KW - NoSQL-Datenbanksysteme KW - Schema-Evolution KW - Schema-Informationen Y1 - 2014 U6 - https://doi.org/10.1007/s13222-014-0156-z VL - 14 IS - 2 SP - 119 EP - 129 PB - Springer ER - TY - JOUR A1 - Scherzinger, Stefanie A1 - Thor, Andreas T1 - Cloud-Technologien in der Hochschullehre – Pflicht oder Kür? JF - Datenbank-Spektrum N2 - A dedicated special issue on data management in the cloud gives us occasion to survey the presence of cloud topics in academic database teaching. In this article, we report the results of a survey within the Fachgruppe Datenbanksysteme, conducted by the Arbeitskreis Datenmanagement in der Cloud. Lecturers from more than twenty universities took part in the survey. The results clearly show that cloud topics are increasingly established in higher education, yet predominantly as a supplementary offering, and less frequently anchored in the core curriculum. We summarise the results of our survey and venture some interpretations. KW - Cloud Computing KW - Hochschullehre Y1 - 2014 U6 - https://doi.org/10.1007/s13222-014-0161-2 VL - 14 IS - 2 SP - 131 EP - 134 PB - Springer Nature ER - TY - CHAP A1 - Cerqueus, Thomas A1 - Cunha de Almeida, Eduardo A1 - Scherzinger, Stefanie T1 - Safely Managing Data Variety in Big Data Software Development T2 - 2015 IEEE/ACM 1st International Workshop on Big Data Software Engineering, 23-23 May 2015, Florence, Italy N2 - We consider the task of building Big Data software systems, offered as software-as-a-service. These applications are commonly backed by NoSQL data stores that address the proverbial Vs of Big Data processing: NoSQL data stores can handle large volumes of data and many systems do not enforce a global schema, to account for structural variety in data. Thus, software engineers can design the data model on the go, a flexibility that is particularly crucial in agile software development. However, NoSQL data stores commonly do not yet account for the veracity of changes to the structure of persisted data. Yet this is an inevitable consequence of agile software development. In most NoSQL-based application stacks, schema evolution is completely handled within the application code, usually involving object mapper libraries. Yet simple code refactorings, such as renaming a class attribute at the source code level, can cause data loss or runtime errors once the application has been deployed to production. We address this pain point by contributing type checking rules that we have implemented within an IDE plug-in. Our plug-in ControVol statically type checks the object mapper class declarations against the code release history. ControVol is thus capable of detecting common yet risky cases of mismatched data and schema, and can even suggest automatic fixes.
KW - Big data KW - history KW - Java KW - Loading KW - NoSQL data stores KW - object mapping KW - Production KW - Runtime KW - schema evolution KW - Software KW - type checking Y1 - 2015 U6 - https://doi.org/10.1109/BIGDSE.2015.9 SP - 4 EP - 10 PB - IEEE ER - TY - CHAP A1 - Scherzinger, Stefanie A1 - Störl, Uta A1 - Klettke, Meike ED - Cheney, James ED - Neumann, Thomas T1 - A Datalog-based protocol for lazy data migration in agile NoSQL Application development T2 - Proceedings of the 15th Symposium on Database Programming Languages : SPLASH '15: Conference on Systems, Programming, Languages, and Applications: Software for Humanity, Pittsburgh PA USA, 27.10.2015 - 27.10.2015 N2 - We address a practical challenge in agile web development against NoSQL data stores: Upon a new release of the web application, entities already persisted in production no longer match the application code. Rather than migrating all legacy entities eagerly (prior to the release) and at the cost of application downtime, lazy data migration is a popular alternative: When a legacy entity is loaded by the application, all pending structural changes are applied. Yet correctly migrating legacy data from several releases back, involving more than one entity at-a-time, is not trivial. In this paper, we propose a holistic model for reading, writing, and migrating data, based on non-recursive Datalog with negation. In implementing our model, we may blend established Datalog evaluation algorithms, such as an incremental evaluation with certain rules evaluated bottom-up, and certain rules evaluated top-down with sideways information passing. Our systematic approach guarantees that from the viewpoint of the application, it remains transparent whether data is migrated eagerly or lazily. Y1 - 2015 SN - 9781450339025 U6 - https://doi.org/10.1145/2815072.2815078 SP - 41 EP - 44 PB - ACM CY - New York, NY, USA ER - TY - CHAP A1 - Cerqueus, Thomas A1 - de Almeida, Eduardo Cunha A1 - Scherzinger, Stefanie ED - Gangemi, Aldo ED - Leonardi, Stefano ED - Panconesi, Alessandro T1 - ControVol: Let Yesterday's Data Catch Up with Today's Application Code T2 - Proceedings of the 24th International Conference on World Wide Web (WWW '15) ; Florence Italy, 18.05.2015 - 22.05.2015 N2 - In building software-as-a-service applications, a flexible development environment is key to shipping early and often. Therefore, schema-flexible data stores are becoming more and more popular. They can store data with heterogeneous structure, allowing for new releases to be pushed frequently, without having to migrate legacy data first. However, the current application code must continue to work with any legacy data that has already been persisted in production. To let legacy data structurally "catch up" with the latest application code, developers commonly employ object mapper libraries with life-cycle annotations. Yet when used without caution, they can cause runtime errors and even data loss. We present ControVol, an IDE plugin that detects evolutionary changes to the application code that are incompatible with legacy data. ControVol warns developers already at development time, and even suggests automatic fixes for lazily migrating legacy data when it is loaded into the application. Thus, ControVol ensures that the structure of legacy data can catch up with the structure expected by the latest software release.
Y1 - 2015 SN - 9781450334730 U6 - https://doi.org/10.1145/2740908.2742719 SP - 15 EP - 16 PB - ACM CY - New York, NY, USA ER - TY - CHAP A1 - Störl, Uta A1 - Müller, Daniel A1 - Tekleab, Alexander A1 - Tolale, Stephane A1 - Stenzel, Julian A1 - Klettke, Meike A1 - Scherzinger, Stefanie T1 - Curating Variational Data in Application Development T2 - 2018 IEEE 34th International Conference on Data Engineering, 16-19 April 2018, Paris, France N2 - Building applications for processing data lakes is a software engineering challenge. We present Darwin, a middleware for applications that operate on variational data. This concerns data with heterogeneous structure, usually stored within a schema-flexible NoSQL database. Darwin assists application developers in essential data and schema curation tasks: Upon request, Darwin extracts a schema description, discovers the history of schema versions, and proposes mappings between these versions. Users of Darwin may interactively choose which mappings are most realistic. Darwin is further capable of rewriting queries at runtime, to ensure that queries also comply with legacy data. Alternatively, Darwin can migrate legacy data to reduce the structural heterogeneity. Using Darwin, developers may thus evolve their data in sync with their code. In our hands-on demo, we curate synthetic as well as real-life datasets. KW - data migration KW - Data mining KW - Evolution (biology) KW - history KW - NoSQL databases KW - query rewriting KW - schema evolution KW - schema management KW - Software KW - Task analysis KW - variational data Y1 - 2018 U6 - https://doi.org/10.1109/ICDE.2018.00187 SP - 1605 EP - 1608 PB - IEEE ER - TY - CHAP A1 - Klettke, Meike A1 - Awolin, Hannes A1 - Störl, Uta A1 - Müller, Daniel A1 - Scherzinger, Stefanie T1 - Uncovering the evolution history of data lakes T2 - 2017 IEEE International Conference on Big Data (Big Data), 11-14 Dec. 2017, Boston, MA, USA N2 - Data accumulating in data lakes can become inaccessible in the long run when their semantics are not available. The heterogeneity of data formats and the sheer volumes of data collections prohibit cleaning and unifying the data manually. Thus, tools for automated data lake analysis are of great interest. In this paper, we target the particular problem of reconstructing the schema evolution history from data lakes. Knowing how the data is structured, and how this structure has evolved over time, enables programmatic access to the lake. By deriving a sequence of schema versions, rather than a single schema, we take into account structural changes over time. Moreover, we address the challenge of detecting inclusion dependencies. This is a prerequisite for mapping between succeeding schema versions, and in particular, detecting nontrivial changes such as a property having been moved or copied. We evaluate our approach for detecting inclusion dependencies using the MovieLens dataset, as well as an adaptation of a dataset containing botanical descriptions, to cover specific edge cases. KW - Data mining KW - evolution operations KW - Grippers KW - history KW - inclusion dependencies KW - integrity constraints KW - Lakes KW - NoSQL databases KW - Protocols KW - schema version extraction Y1 - 2017 U6 - https://doi.org/10.1109/BigData.2017.8258204 SP - 2462 EP - 2471 PB - IEEE ER - TY - CHAP A1 - Ringlstetter, Andreas A1 - Scherzinger, Stefanie A1 - Bissyandé, Tegawendé F.
T1 - Data Model Evolution Using Object-NoSQL Mappers: Folklore or State-of-the-Art? T2 - 2016 IEEE/ACM 2nd International Workshop on Big Data Software Engineering (BIGDSE), 16 May 2016, Austin, TX, USA N2 - In big data software engineering, the schema flexibility of NoSQL document stores is a major selling point: When the document store itself does not actively manage a schema, the data model is maintained within the application. Just like object-relational mappers for relational databases, object-NoSQL mappers are part of professional software development with NoSQL document stores. Some mappers go beyond merely loading and storing Java objects: Using dedicated evolution annotations, developers may conveniently add, remove, or rename attributes of stored objects, and also conduct more complex transformations. In this paper, we analyze the dissemination of this technology in Java open source projects. While we find evidence on GitHub that evolution annotations are indeed being used, developers do not employ them so much for evolving the data model, but to solve different tasks instead. Our observations trigger interesting questions for further research. KW - Big data KW - data model evolution KW - Data models KW - Java KW - Loading KW - Object-NoSQL mappers KW - Software KW - Software engineering KW - Transient analysis Y1 - 2016 U6 - https://doi.org/10.1145/2896825.2896827 SP - 33 EP - 36 PB - ACM ER - TY - JOUR A1 - Mauerer, Wolfgang A1 - Scherzinger, Stefanie T1 - Digitale Forschungswerkzeuge : Nachhaltigkeit für Software und Daten JF - Forschung & Lehre N2 - The scientific reproducibility crisis has intensified the focus on digital research tools. Even though the extra effort required for reproducibility and accessibility is increasingly acknowledged, deficits remain in practice when it comes to making the underlying data and research tools available. Y1 - 2021 UR - https://www.wissenschaftsmanagement-online.de/system/files/downloads-wimoarticle/2021-10_WIMO_Digitale_Forschungswerkzeuge_Mauerer_Scherzinger.pdf VL - 28 IS - 10 SP - 816 EP - 817 PB - Zentrum für Wissenschaftsmanagement e.V. (ZWM)
TY - CHAP A1 - Ringlstetter, Andreas A1 - Scherzinger, Stefanie A1 - Bissyandé, Tegawendé F. T1 - Data Model Evolution Using Object-NoSQL Mappers: Folklore or State-of-the-Art? T2 - 2016 IEEE/ACM 2nd International Workshop on Big Data Software Engineering (BIGDSE), 16 May 2016, Austin, TX, USA N2 - In big data software engineering, the schema flexibility of NoSQL document stores is a major selling point: When the document store itself does not actively manage a schema, the data model is maintained within the application. Just like object-relational mappers for relational databases, object-NoSQL mappers are part of professional software development with NoSQL document stores. Some mappers go beyond merely loading and storing Java objects: Using dedicated evolution annotations, developers may conveniently add, remove, or rename attributes of stored objects, and also conduct more complex transformations. In this paper, we analyze the dissemination of this technology in Java open source projects. While we find evidence on GitHub that evolution annotations are indeed being used, developers do not employ them so much for evolving the data model, but to solve different tasks instead. Our observations trigger interesting questions for further research. KW - Big data KW - data model evolution KW - Data models KW - Java KW - Object-NoSQL mappers KW - Software KW - Software engineering Y1 - 2016 U6 - https://doi.org/10.1145/2896825.2896827 SP - 33 EP - 36 PB - ACM ER -
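The evolution annotations mentioned in the preceding entry can be emulated conceptually. The mappers studied are Java libraries; the Python sketch below only mirrors rename-on-load semantics, and the Entity class with its also_load alias table is invented for illustration, not any mapper's real API.

# Conceptual emulation of a rename-on-load evolution annotation
# (hypothetical; real object-NoSQL mappers declare this via Java
# annotations on the mapped class).
class Entity:
    # When loading, accept these legacy attribute names as aliases.
    also_load = {"full_name": ["name", "username"]}

    def __init__(self, full_name):
        self.full_name = full_name

    @classmethod
    def from_document(cls, doc):
        # Apply the alias table before constructing the object.
        for current, legacy_names in cls.also_load.items():
            if current not in doc:
                for legacy in legacy_names:
                    if legacy in doc:
                        doc[current] = doc.pop(legacy)
                        break
        return cls(**doc)

print(Entity.from_document({"name": "Ada"}).full_name)  # prints: Ada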
TY - JOUR A1 - Mauerer, Wolfgang A1 - Scherzinger, Stefanie T1 - Digitale Forschungswerkzeuge : Nachhaltigkeit für Software und Daten JF - Forschung & Lehre N2 - The scientific reproducibility crisis has sharpened the focus on digital research tools. Even though the additional effort required for reproducibility and accessibility is increasingly acknowledged, deficits in implementation remain when it comes to making the underlying data and research tools available. Y1 - 2021 UR - https://www.wissenschaftsmanagement-online.de/system/files/downloads-wimoarticle/2021-10_WIMO_Digitale_Forschungswerkzeuge_Mauerer_Scherzinger.pdf VL - 28 IS - 10 SP - 816 EP - 817 PB - Zentrum für Wissenschaftsmanagement e.V. (ZWM) ER - TY - JOUR A1 - Klettke, Meike A1 - Störl, Uta A1 - Scherzinger, Stefanie T1 - Herausforderungen bei der Anwendungsentwicklung mit schema-flexiblen NoSQL-Datenbanken JF - HMD Praxis der Wirtschaftsinformatik N2 - NoSQL data stores have become very popular over the last years, and for good reasons: One attractive feature of many systems is their schema flexibility, which is particularly beneficial in agile software development projects. Due to their horizontal scalability, NoSQL data stores make it possible to efficiently process large amounts of data. Some systems, designed as data backends for interactive applications, can also serve highly frequent user requests. Apart from these advantages, there are also downsides to NoSQL data stores that create new challenges for software development: Missing standards for query languages make it difficult to build data-store-independent applications. Schema flexibility in the data store shifts the responsibility for schema management into the application. This article identifies substantial challenges and presents solution approaches from research and practice. The focus of our survey is on schema-flexible NoSQL data management systems with an aggregate-oriented data model, i.e., key-value data stores as well as document and column-family data stores. KW - query language KW - application development KW - database KW - database system KW - data model KW - data system KW - horizontal scalability KW - column-family data KW - management system KW - software development Y1 - 2016 U6 - https://doi.org/10.1365/s40702-016-0234-9 VL - 53 IS - 4 SP - 428 EP - 442 PB - Springer ER - TY - CHAP A1 - Möller, Mark Lukas A1 - Berton, Nicolas A1 - Klettke, Meike A1 - Scherzinger, Stefanie A1 - Störl, Uta T1 - jHound: Large-Scale Profiling of Open JSON Data T2 - Datenbanksysteme für Business, Technologie und Web (BTW 2019), 18. Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS) : 4.-8. März 2019 in Rostock N2 - We present jHound, a tool for profiling large collections of JSON data, and apply it to thousands of data sets holding open government data. jHound reports key characteristics of JSON documents, such as their nesting depth. As we show, jHound can help detect structural outliers, and most importantly, badly encoded documents: jHound can pinpoint certain cases of documents that use string-typed values where other native JSON datatypes would have been a better match. Moreover, we can detect certain cases of maladaptively structured JSON documents, which clearly do not comply with good data modeling practices. By interactively exploring particular example documents, we hope to inspire discussions in the community about what makes a good JSON encoding. KW - data collection KW - data format KW - data model KW - characteristic values KW - main memory KW - histogram KW - time monitoring Y1 - 2019 UR - https://btw.informatik.uni-rostock.de/index.php/de/tagungsbaende/send/3-tagungsbaende/tagungsband.pdf SN - 978-3-88579-683-1 VL - 289 SP - 557 EP - 560 PB - GI - Gesellschaft für Informatik CY - Bonn ER -
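Two of the jHound-style checks named in the preceding entry, nesting depth and string-typed values that would parse as native JSON numbers, are easy to sketch. The Python fragment below is a toy reconstruction under those assumptions, not jHound's actual implementation.

# Illustrative profiling checks in the spirit of jHound:
# (1) report the nesting depth of a JSON document, and
# (2) flag string values that would parse as a JSON number.
import json

def nesting_depth(node):
    if isinstance(node, dict):
        return 1 + max((nesting_depth(v) for v in node.values()), default=0)
    if isinstance(node, list):
        return 1 + max((nesting_depth(v) for v in node), default=0)
    return 0

def stringly_typed_numbers(node, path="$"):
    if isinstance(node, dict):
        for k, v in node.items():
            yield from stringly_typed_numbers(v, f"{path}.{k}")
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield from stringly_typed_numbers(v, f"{path}[{i}]")
    elif isinstance(node, str):
        try:
            float(node)
            yield path  # a number is hiding in a string
        except ValueError:
            pass

doc = json.loads('{"station": {"id": "42", "lat": 49.01}}')
print(nesting_depth(doc), list(stringly_typed_numbers(doc)))
# 2 ['$.station.id']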
TY - CHAP A1 - Scherzinger, Stefanie A1 - Sombach, Stephanie A1 - Wiech, Katharina A1 - Klettke, Meike A1 - Störl, Uta T1 - Datalution: a tool for continuous schema evolution in NoSQL-backed web applications T2 - QUDOS 2016: Proceedings of the 2nd International Workshop on Quality-Aware DevOps N2 - When an incremental release of a web application is deployed, the structure of data already persisted in the production database may no longer match what the application code expects. Traditionally, eager schema migration is called for, where all legacy data is migrated in one go. With the growing popularity of schema-flexible NoSQL data stores, lazy forms of data migration have emerged: Legacy entities are migrated on the fly, one at a time, when they are loaded by the application. In this demo, we present Datalution, a tool demonstrating the merits of lazy data migration. Datalution can apply chains of pending schema changes, due to its Datalog-based internal representation. The Datalution approach thus ensures that schema evolution, as part of continuous deployment, is carried out correctly. N1 - An illustrative sketch of lazy migration with pending change chains follows at the end of this listing. Y1 - 2016 U6 - https://doi.org/10.1145/2945408.2945416 SP - 38 EP - 39 PB - ACM ER - TY - GEN A1 - Schönberger, Manuel A1 - Scherzinger, Stefanie A1 - Mauerer, Wolfgang T1 - Quantum Computing for DB - Applicability on Multi Query Optimization and Join Order Optimization T2 - Frühjahrstreffen Fachgruppe Datenbanken in Potsdam, 2022 Y1 - 2022 UR - https://www.lfdr.de/Publications/2022/FGDB_Poster_Schoenberger.pdf ER - TY - CHAP A1 - Mauerer, Wolfgang A1 - Klessinger, Stefan A1 - Scherzinger, Stefanie T1 - Beyond the badge: reproducibility engineering as a lifetime skill T2 - Proceedings of the 4th International Workshop on Software Engineering Education for the Next Generation (SEENG 2022), 17 May 2022, Pittsburgh, PA, USA N2 - Ascertaining reproducibility of scientific experiments is receiving increased attention across disciplines. We argue that the necessary skills are important beyond pure scientific utility, and that they should be taught as part of software engineering (SWE) education. They serve a dual purpose: Apart from acquiring the coveted badges assigned to reproducible research, reproducibility engineering is a lifetime skill for a professional industrial career in computer science. SWE curricula seem an ideal fit for conveying such capabilities, yet they require some extensions, especially given that even at flagship conferences like ICSE, only slightly more than one-third of the technical papers (at the 2021 edition) receive recognition for artefact reusability. Knowledge and capabilities in setting up engineering environments that allow for reproducing artefacts and results over decades (a standard requirement in many traditional engineering disciplines), writing semi-literate commit messages that document crucial steps of a decision-making process and that are tightly coupled with code, or sustainably taming dynamic, quickly changing software dependencies, to name a few: they all contribute to solving the scientific reproducibility crisis, and enable software engineers to build sustainable, long-term maintainable, software-intensive industrial systems. We propose to teach these skills at the undergraduate level, on par with traditional SWE topics. KW - reproducibility engineering KW - teaching software engineering Y1 - 2022 SN - 9781450393362 U6 - https://doi.org/10.1145/3528231.3528359 N1 - Preprint at: https://doi.org/10.48550/arXiv.2203.05283 SP - 1 EP - 4 PB - ACM CY - New York, NY, USA ER - TY - JOUR A1 - Thor, Andreas A1 - Scherzinger, Stefanie A1 - Specht, Günther T1 - Editorial JF - Datenbank-Spektrum Y1 - 2014 U6 - https://doi.org/10.1007/s13222-014-0162-1 VL - 14 IS - 2 SP - 81 EP - 84 PB - Springer ER - TY - JOUR A1 - Broser, Christian A1 - Falter, Thomas A1 - Ławrowski, Robert Damian A1 - Altenbuchner, Amelie A1 - Vögele, Daniel A1 - Koss, Claus A1 - Schlampp, Matthias A1 - Dunnweber, Jan A1 - Steffens, Oliver A1 - Heckner, Markus A1 - Jaritz, Sabine A1 - Schiegl, Thomas A1 - Corsten, Sabine A1 - Lauer, Norina A1 - Guertler, Katherine A1 - Koenig, Eric A1 - Haug, Sonja A1 - Huber, Dominik A1 - Birkenmaier, Clemens A1 - Krenkel, Lars A1 - Wagner, Thomas A1 - Justus, Xenia A1 - Saßmannshausen, Sean Patrick A1 - Kleine, Nadine A1 - Weber, Karsten A1 - Braun, Carina N. A1 - Giacoppo, Giuliano A1 - Heinrich, Michael A1 - Just, Tobias A1 - Schreck, Thomas A1 - Schnabl, Andreas A1 - Gilmore, Amador Téran A1 - Roeslin, Samuel A1 - Schmid, Sandra A1 - Wellnitz, Felix A1 - Malz, Sebastian A1 - Maurial, Andreas A1 - Hauser, Florian A1 - Mottok, Jürgen A1 - Klettke, Meike A1 - Scherzinger, Stefanie A1 - Störl, Uta A1 - Heckner, Markus A1 - Bazo, Alexander A1 - Wolff, Christian A1 - Kopper, Andreas A1 - Westner, Markus A1 - Pongratz, Christian A1 - Ehrlich, Ingo A1 - Briem, Ulrich A1 - Hederer, Sebastian A1 - Wagner, Marcus A1 - Schillinger, Moritz A1 - Görlach, Julien A1 - Hierl, Stefan A1 - Siegl, Marco A1 - Langer, Christoph A1 - Hausladen, Matthias A1 - Schreiner, Rupert A1 - Haslbeck, Matthias A1 - Kreuzer, Reinhard A1 - Brückl, Oliver A1 - Dawoud, Belal A1 - Rabl, Hans-Peter A1 - Gamisch, Bernd A1 - Schmidt, Ottfried A1 - Heberl, Michael A1 - Gänsbauer, Bianca A1 - Bick, Werner A1 - Ellermeier, Andreas A1 - Monkman, Gareth J. A1 - Prem, Nina A1 - Sindersberger, Dirk A1 - Tschurtschenthaler, Karl A1 - Aurbach, Maximilian A1 - Dendorfer, Sebastian A1 - Betz, Michael A. A1 - Szecsey, Tamara A1 - Mauerer, Wolfgang A1 - Murr, Florian ED - Baier, Wolfgang T1 - Forschung 2018 T3 - Forschungsberichte der OTH Regensburg - 2018 KW - research KW - research report Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-13826 SN - 978-3-9818209-5-9 CY - Regensburg ER -
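Referring back to the Datalution entry above: its lazy application of chains of pending schema changes can be sketched as follows. Datalution's internal representation is Datalog-based; this Python fragment only illustrates the observable on-load behaviour, and the PENDING change list with its operations and property names is invented for this example.

# Sketch of lazily applying a chain of pending schema changes on load
# (illustrative only; not Datalution's Datalog-based implementation).
PENDING = [
    ("add",    "status", "active"),     # add a property with a default
    ("rename", "user",   "user_name"),  # rename a property
    ("delete", "temp",   None),         # drop a property
]

def migrate_lazily(entity):
    """Apply all pending changes, oldest first, to one loaded entity."""
    for op, prop, arg in PENDING:
        if op == "add":
            entity.setdefault(prop, arg)
        elif op == "rename" and prop in entity:
            entity[arg] = entity.pop(prop)
        elif op == "delete":
            entity.pop(prop, None)
    return entity

print(migrate_lazily({"user": "ada", "temp": 1}))
# {'status': 'active', 'user_name': 'ada'}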