The prospect of achieving computational speedups by exploiting quantum phenomena makes the use of quantum processing units (QPUs) attractive for many algorithmic database problems. Query optimisation, which concerns problems that typically need to explore large search spaces, seems like an ideal match for the known quantum algorithms. We present the first quantum implementation of join ordering, one of the most investigated and fundamental query optimisation problems, based on a reformulation to quadratic unconstrained binary optimisation (QUBO) problems. We empirically characterise our method on two state-of-the-art approaches (gate-based quantum computing and quantum annealing), and identify speedups compared to the best known classical join ordering approaches for input sizes that can be processed with current quantum annealers. However, we also confirm that the limits of this early-stage technology are quickly reached.
Current QPUs are classified as noisy intermediate-scale quantum (NISQ) computers, and are restricted by a variety of limitations that reduce their capabilities compared to ideal future quantum computers, which prevents us from scaling up problem dimensions and reaching practical utility. To overcome these challenges, our formulation accounts for specific QPU properties and limitations, and allows us to trade off achievable solution quality against possible problem size.
In contrast to all prior work on quantum computing for query optimisation and database-related challenges, we go beyond currently available QPUs, and explicitly target the scalability limitations: Using insights gained from numerical simulations and our experimental analysis, we identify key criteria for co-designing QPUs to improve their usefulness for join ordering, and show how even relatively minor physical architectural improvements can result in substantial enhancements. Finally, we outline a path towards practical utility of custom-designed QPUs.
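To make the QUBO setting concrete, the following is a minimal sketch (our illustration, not the paper's actual join-ordering encoding) of the objective such a reformulation produces: a quadratic function over binary variables, here minimised by exhaustive enumeration on a toy instance with an invented matrix Q.

```python
# Minimal sketch: brute-force minimisation of a toy QUBO instance.
# A quantum annealer would minimise the same objective x^T Q x over
# binary vectors x; the matrix Q below is invented for illustration
# and is NOT the paper's actual join-ordering encoding.
import itertools
import numpy as np

# Toy QUBO matrix: diagonal entries act as linear costs, off-diagonal
# entries as pairwise interaction penalties (e.g. constraint terms).
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])

def qubo_energy(x: np.ndarray) -> float:
    """Energy of a binary assignment x under the objective x^T Q x."""
    return float(x @ Q @ x)

# Exhaustive search is only feasible for tiny instances, which mirrors
# why large search spaces motivate quantum approaches in the first place.
best = min(
    (np.array(bits) for bits in itertools.product([0, 1], repeat=len(Q))),
    key=qubo_energy,
)
print("best assignment:", best, "energy:", qubo_energy(best))
```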
Forschungsbericht 2013 (2014)
Forschungsbericht 2017 (2017)
Machine learning experts prefer to think of their input as a single, homogeneous, and consistent data set. However, when analyzing large volumes of data, the entire data set may not be manageable on a single server, but must be stored on a distributed file system instead. Moreover, with the pressing demand to deliver explainable models, the experts may no longer focus on the machine learning algorithms in isolation, but must take into account the distributed nature of the stored data, as well as the impact of any data pre-processing steps upstream in their data analysis pipeline. In this paper, we make the point that even basic transformations during data preparation can impact the model learned, and that this is exacerbated in a distributed setting. We then sketch our vision of end-to-end explainability of the model learned, taking the pre-processing into account. In particular, we point out the potential of linking the contributions of research on data provenance with the efforts on explainability in machine learning. In doing so, we highlight pitfalls we may experience in a distributed system on the way to generating more holistic explanations for our machine learning models.
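As a concrete toy illustration of this point (our own example, not taken from the paper): standardising each partition of a distributed data set separately yields different feature values than standardising the logically complete data set once, so any downstream model depends on where the pre-processing ran.

```python
# Toy illustration (not from the paper): per-partition standardisation
# differs from global standardisation, so a model trained on the
# pre-processed data depends on where the pre-processing ran.
import numpy as np

rng = np.random.default_rng(0)
part_a = rng.normal(loc=0.0, scale=1.0, size=1000)   # data on server A
part_b = rng.normal(loc=5.0, scale=2.0, size=1000)   # data on server B

def standardise(x: np.ndarray) -> np.ndarray:
    return (x - x.mean()) / x.std()

# Pre-processing applied independently per partition ...
local = np.concatenate([standardise(part_a), standardise(part_b)])
# ... versus once over the logically complete data set.
global_ = standardise(np.concatenate([part_a, part_b]))

# The resulting feature values disagree, and so does any downstream model.
print("max discrepancy:", np.abs(local - global_).max())
```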
We demonstrate MigCast, a tool-based advisor for exploring data migration strategies in the context of developing NoSQL-backed applications. Users of MigCast can consider their options for evolving their data model along with legacy data already persisted in the cloud-hosted production database. They can explore alternative actions, as the financial costs are predicted for the chosen cloud provider. Thereby, they are better equipped to assess the potential consequences of imminent data migration decisions. To this end, MigCast maintains an internal cost model, taking into account characteristics of the data instance, the expected workload, data model changes, and cloud provider pricing models. Hence, MigCast enables software project stakeholders to remain in control of operative costs and to make informed decisions when evolving their applications.
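To give a flavour of such a cost model, here is a hypothetical sketch; the function names, parameters, and pricing figures are all invented and are not MigCast's actual internal model.

```python
# Hypothetical sketch of a migration cost model in the spirit of the
# description above; all names, parameters, and prices are invented
# and are NOT MigCast's actual internal model.
from dataclasses import dataclass

@dataclass
class Pricing:
    """Simplified cloud-provider pricing (per one million operations)."""
    read_cost: float
    write_cost: float

def eager_migration_cost(num_entities: int, pricing: Pricing) -> float:
    """Eager strategy: rewrite every legacy entity immediately."""
    ops = num_entities / 1_000_000
    return ops * (pricing.read_cost + pricing.write_cost)

def lazy_migration_cost(num_entities: int, access_fraction: float,
                        pricing: Pricing) -> float:
    """Lazy strategy: migrate an entity only when the workload touches it."""
    ops = num_entities * access_fraction / 1_000_000
    return ops * (pricing.read_cost + pricing.write_cost)

provider = Pricing(read_cost=0.36, write_cost=1.80)  # invented figures
print("eager:", eager_migration_cost(10_000_000, provider))
print("lazy :", lazy_migration_cost(10_000_000, 0.15, provider))
```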
Machine learning (ML) based decision making is becoming commonplace. For persons affected by ML-based decisions, a certain level of transparency regarding the properties of the underlying ML model can be fundamental. In this vision paper, we propose to issue consumer labels for trained and published ML models. These labels primarily target machine learning laypersons, such as the operators of an ML system, the executors of decisions, and the decision subjects themselves. Provided that consumer labels comprehensively capture the characteristics of the trained ML model, consumers can recognize when human intelligence should supersede artificial intelligence. In the long run, we envision a service that generates these consumer labels (semi-)automatically. In this paper, we survey the requirements that an ML system should meet, and correspondingly, the properties that an ML consumer label could capture. We further discuss the feasibility of operationalizing and benchmarking these requirements in the automated generation of ML consumer labels.
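The following sketch illustrates the kind of properties such a label might capture as structured data; the fields are our illustrative guesses, not the paper's catalogue of requirements.

```python
# Hypothetical sketch of an ML consumer label as structured data; the
# fields and example values are illustrative, not the paper's catalogue.
from dataclasses import dataclass, field

@dataclass
class MLConsumerLabel:
    model_name: str
    intended_use: str                  # task the model was trained for
    test_accuracy: float               # headline performance figure
    evaluated_groups: list[str] = field(default_factory=list)   # fairness scope
    known_limitations: list[str] = field(default_factory=list)

label = MLConsumerLabel(
    model_name="loan-default-classifier",
    intended_use="pre-screening of consumer loan applications",
    test_accuracy=0.87,
    evaluated_groups=["age", "gender"],
    known_limitations=["not validated on applicants outside the EU"],
)
print(label)
```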
A new vision in semantic big data processing is to create enterprise data hubs, with a 360° view of all data that matters to a corporation. As we discuss in this paper, a new generation of multi-model database systems seems a promising architectural choice for building such scalable, non-native triple stores. In this paper, we first characterize this new generation of multi-model databases. Then, discussing an example scenario, we show how they allow for agile and flexible schema management, spanning a large design space for creative and incremental data modelling. We identify the challenge of generating sound triple views over data stored in several interlinked models, for SPARQL querying. We regard this as one of several appealing research challenges where the semantic big data and the database architecture communities may join forces.
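As a toy sketch of the triple-view idea (our illustration under assumed names, not the paper's mechanism), the following derives RDF-style (subject, predicate, object) triples from a JSON document held in a document model, the kind of view a SPARQL engine could then query over a non-native triple store.

```python
# Toy sketch (our illustration, not the paper's mechanism): deriving
# RDF-style triples from a JSON document stored in a document model,
# so that SPARQL-like queries could run over non-native triples.
BASE = "http://example.org/"   # hypothetical namespace

def document_to_triples(doc_id: str, doc: dict) -> list[tuple[str, str, str]]:
    """Flatten one JSON document into (subject, predicate, object) triples."""
    subject = BASE + doc_id
    triples = []
    for key, value in doc.items():
        if isinstance(value, dict):
            # Nested documents become linked resources.
            nested_id = f"{doc_id}/{key}"
            triples.append((subject, BASE + key, BASE + nested_id))
            triples.extend(document_to_triples(nested_id, value))
        else:
            triples.append((subject, BASE + key, str(value)))
    return triples

order = {"customer": {"name": "Jones"}, "total": 42}
for t in document_to_triples("order1", order):
    print(t)
```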