FG Datenbanken und Informationssysteme
For a binary classification problem, let an artificial neural network ann be given that consists of ReLU nodes and linear layers (convolution, pooling, fully connected). The network ann is assumed to be trained with sufficient accuracy on training data. We will show that such a network can be decomposed into different partitions of the input space, where each partition represents a linear mapping of the input values onto a classifying output value. Furthermore, we assume a simple network ann whose input values correspond to minterms of attribute values. A network is simple if it was trained for a small number of attributes and the number of ReLU nodes is also small. The thesis shows that every linear partition can be described by a CQQL expression. A CQQL expression can in turn be described with the help of quantum-logic-inspired decision trees.
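The following minimal sketch illustrates the partitioning argument on a made-up two-input, two-ReLU network (weights, biases, and sizes are assumptions, not taken from the thesis): fixing the on/off pattern of the ReLU nodes turns the network into one affine map, and enumerating the patterns enumerates the linear partitions of the input space.

import itertools
import numpy as np

# Hypothetical tiny network: one hidden ReLU layer, one linear output layer.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])   # hidden weights (2 ReLU nodes, 2 inputs)
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, -2.0]])               # output weights
b2 = np.array([0.5])

def forward(x):
    """Standard forward pass with ReLU."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def linear_map_for_pattern(pattern):
    """For a fixed ReLU on/off pattern the network is affine: y = A x + c."""
    D = np.diag(pattern)            # 1 keeps a node active, 0 switches it off
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2
    return A, c

# Each activation pattern corresponds to one (possibly empty) region of the
# input space on which the network acts as a single linear (affine) map.
for pattern in itertools.product([0.0, 1.0], repeat=2):
    A, c = linear_map_for_pattern(np.array(pattern))
    print(pattern, "->", A, c)

# Sanity check: inside a region, forward() and the affine map agree.
x = np.array([2.0, 1.0])
pattern = (W1 @ x + b1 > 0).astype(float)
A, c = linear_map_for_pattern(pattern)
assert np.allclose(forward(x), A @ x + c)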
People in a social network are connected, and their homogeneity is reflected by the similarity of their attributes. For effective clustering, the similarities among people within a cluster must be much higher than the similarities between different clusters. Traditional clustering algorithms like hierarchical (agglomerative) or k-medoids take distances between objects as input and find clusters of objects. The distance functions used should comply with the triangle inequality (TI) property, but sometimes this property may be violated, thus negatively impacting the quality of the generated clusters.
However, in social networks, meaningful clusters can be found even though the TI is violated. One possibility is a quantum-logic-based, clique-guided non-TI clustering approach. The commuting quantum query language (CQQL) forms the basis of this approach. CQQL allows the formulation of queries that incorporate both Boolean and similarity conditions. It calculates the similarity value between two objects.
Furthermore, attributes may not have an equal impact on similarity, which in turn affects the resulting clusters. CQQL incorporates weights to express the varying importance of sub-conditions in a query while preserving consistency with Boolean algebra. This enables the personalization of results through relevance feedback (RF).
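As an illustration of how such queries might be evaluated, the following sketch assumes the commonly reported algebraic rules for commuting CQQL conditions (product for conjunction, a + b - a*b for disjunction, 1 - a for negation) and one possible reading of the weighting scheme; the persons, attributes, and similarity functions are made up and not taken from the thesis.

# Illustrative CQQL-style evaluation (a sketch, not the thesis implementation).
# Assumed rules for commuting conditions: AND -> product, OR -> a + b - a*b,
# NOT -> 1 - a; Boolean conditions evaluate to exactly 0 or 1.

def cqql_and(a, b):
    return a * b

def cqql_or(a, b):
    return a + b - a * b

def cqql_not(a):
    return 1.0 - a

def weighted_and(a, b, theta_a, theta_b):
    # One possible reading of CQQL weighting:
    # c1 AND_w c2 := (c1 OR not(theta1)) AND (c2 OR not(theta2)),
    # where each theta acts as a constant condition with a value in [0, 1];
    # theta = 1 keeps a condition fully, theta = 0 switches it off.
    return cqql_or(a, 1.0 - theta_a) * cqql_or(b, 1.0 - theta_b)

# Hypothetical query over two persons in a social network:
# "same city (Boolean) AND similar age AND similar interests".
def age_similarity(a1, a2, scale=50.0):
    return max(0.0, 1.0 - abs(a1 - a2) / scale)

p1 = {"city": "Cottbus", "age": 30, "interest_overlap": 0.8}
p2 = {"city": "Cottbus", "age": 42, "interest_overlap": 0.6}

same_city = 1.0 if p1["city"] == p2["city"] else 0.0          # Boolean condition
sim_age = age_similarity(p1["age"], p2["age"])                  # similarity condition
sim_int = min(p1["interest_overlap"], p2["interest_overlap"])   # similarity condition

score = cqql_and(same_city, weighted_and(sim_age, sim_int, 0.5, 1.0))
print(score)  # value in [0, 1]; theta = 0.5 reduces the influence of the age condition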
The main challenge of comparing clusterings is that there is no ground truth data. In such situations, a human-generated gold standard clustering can be used. The question is how to compare the performance of clusterings. A noteworthy technique involves counting the pairs of objects that are grouped identically in both clusterings. By doing so, a clustering distance is calculated that measures their dissimilarity.
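A minimal sketch of such a pair-counting comparison, assuming a Rand-index-style distance and made-up object names:

from itertools import combinations

def pair_counting_distance(labels_a, labels_b):
    """Fraction of object pairs grouped differently in the two clusterings
    (1 - Rand index). labels_a/labels_b map each object to a cluster label."""
    objects = list(labels_a)
    disagreements = 0
    total = 0
    for x, y in combinations(objects, 2):
        same_a = labels_a[x] == labels_a[y]
        same_b = labels_b[x] == labels_b[y]
        disagreements += same_a != same_b
        total += 1
    return disagreements / total if total else 0.0

# Hypothetical example: computed clustering vs. a human gold standard.
computed = {"anna": 0, "ben": 0, "carla": 1, "dave": 1, "eva": 2}
gold     = {"anna": 0, "ben": 0, "carla": 0, "dave": 1, "eva": 1}
print(pair_counting_distance(computed, gold))  # 0.0 would mean identical grouping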
To validate the non-TI clustering approach, experiments are conducted on social networks of different sizes. Three central questions are addressed by these experiments: first, is it possible to find meaningful clusters even though the TI is violated; second, how does a user interact with the system to provide feedback based on their needs; and third, how fast do the clusters detected by the proposed approach converge to the ideal solution?
To sum up, the experiments’ objective is to demonstrate the validity of a theoretical approach. The research findings presented here provide sufficient evidence for detecting meaningful clusters based on user interaction. Furthermore, the experiments clearly demonstrate that the non-TI clustering approach can be used as an RF technique in clustering.
Convolutional neural networks are often used successfully for classification problems. Usually, a huge number of weights needs to be learnt from training data. However, the learnt weights give no insight into how the cnn really works. Thus, a cnn can be seen as a black-box solution. In our approach we develop a method to generate a commuting quantum query language (cqql) condition from a sample derived from a given cnn or from training input. CQQL is inspired by quantum logic, and its conditions obey the rules of Boolean algebra. The evaluation of a cqql condition provides values from the unit interval [0, 1] and therefore establishes an elegant bridge between logic and a cnn. The underlying assumption is that a condition (a logic expression) gives much more understanding than pure cnn weights. Furthermore, the rich theory of Boolean algebra can be used for manipulating logic expressions. After extracting a cqql condition from a cnn or its training data, we can use logic as an alternative to the cnn for predicting classes.
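The following toy sketch only illustrates the general idea of reading a logic expression off a black-box classifier sampled on minterms; the stand-in classifier is invented, and the real approach derives a cqql condition rather than a plain Boolean DNF.

import itertools

# Toy sketch: sample a black-box classifier on binary feature vectors and read
# off a DNF (one minterm per positively classified input) that mimics its
# decisions. The classifier below is a made-up stand-in for a trained cnn.

def black_box_classifier(bits):
    # hypothetical "cnn": predicts 1 iff at least two of three features are set
    return int(sum(bits) >= 2)

def extract_dnf(n_features, classify):
    positive_minterms = []
    for bits in itertools.product([0, 1], repeat=n_features):
        if classify(bits):
            positive_minterms.append(bits)
    return positive_minterms  # DNF: OR over these minterms

def eval_dnf(minterms, bits):
    return int(any(all(b == m for b, m in zip(bits, mt)) for mt in minterms))

dnf = extract_dnf(3, black_box_classifier)
# The extracted logic expression reproduces the sampled classifier exactly on
# the sampled domain and can now be simplified with Boolean algebra.
for bits in itertools.product([0, 1], repeat=3):
    assert eval_dnf(dnf, bits) == black_box_classifier(bits)
print(dnf)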
In contrast to traditional data applications, many real-world scenarios nowadays depend on managing and querying huge volumes of uncertain and incomplete data. This new type of application emerges, for example, when we integrate data from various sources, analyse social/biological/chemical networks, or conduct privacy-preserving data mining.
A very promising concept addressing this new kind of probabilistic data application has been proposed in the form of probabilistic databases. Here, a tuple only belongs to its table or query answer with a specific likelihood. That probability expresses the uncertainty about the given data or the confidence in the answer. The most challenging task for probabilistic databases is query evaluation. In fact, there are even simple relational queries for which determining the occurrence probability of a single answer tuple is #P-hard.
Lineage formulas constitute the central concept under investigation in this work. In short, the mechanism behind lineage formulas facilitates the representation and evaluation of events of the probability space, which is defined by a probabilistic database. On the basis of lineage formulas, we devise a framework that is designed as a combination of a relational database layer and an additional probabilistic query engine.
In particular, the following three aspects are studied:
(i) an efficient construction of lineage formulas,
(ii) an orthogonal combination of lineage optimization techniques, which are performed within the relational database layer and the probabilistic query engine, and
(iii) effective and compact data structures to represent lineage formulas within a probabilistic query engine.
The developed framework provides a novel lineage construction method that is able to construct nested lineage formulas, to avoid large tuple sets within the relational database layer, and to provide full relational algebra support. In addition, the proposed system completely resolves the conflict between the contradicting query plans optimized for the relational database layer and for the probabilistic query engine.
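For illustration, the sketch below builds the lineage of one hypothetical answer tuple over a tuple-independent database and evaluates its occurrence probability by brute force over possible worlds; the tuple identifiers and probabilities are made up, and a real engine avoids this exponential enumeration.

import itertools

# Tuple id -> probability of the tuple being present.
probs = {"r1": 0.9, "r2": 0.5, "s1": 0.7, "s2": 0.4}

# Lineage of one answer tuple of a join/projection, e.g.
# (r1 AND s1) OR (r2 AND s2), as a Boolean formula over tuple ids.
def answer_lineage(world):
    return (world["r1"] and world["s1"]) or (world["r2"] and world["s2"])

def answer_probability(lineage, probs):
    total = 0.0
    ids = list(probs)
    for bits in itertools.product([False, True], repeat=len(ids)):
        world = dict(zip(ids, bits))
        if lineage(world):
            # Probability of this possible world (tuples are independent).
            p = 1.0
            for t in ids:
                p *= probs[t] if world[t] else 1.0 - probs[t]
            total += p
    return total

print(answer_probability(answer_lineage, probs))  # occurrence probability of the answer tuple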
The search for textual information, e.g., in the form of webpages, is a typical task in modern business and private life. From a user's point of view, the commonly used systems have matured and established common interaction design patterns such as the textual input box that starts virtually every directed search process.
In comparison, the search for multimedia documents (e.g., images or videos) is still in its early years. In other words, a predominant search strategy has not yet evolved. That is, directed and exploratory search approaches still compete for user acceptance.
One further factor distinguishing multimedia information retrieval (MMIR) from traditional text-based information retrieval (IR) is that multimedia documents are not necessarily stored with the help of the same data access paradigm.
From a technical point of view, the use of different data access paradigms complicates the retrieval from such collections because the utilized retrieval model has to support these paradigms.
As a consequence, the main challenges in MMIR, the retrieval engine and the user interaction, have to be addressed in a holistic way. A holistic theoretic perspective on MMIR/IR research is taken by the principle of polyrepresentation (PoP), which forms one half of the theoretic background of this dissertation aiming at the development of a preference-based approach to interactive MMIR. Roughly speaking, the PoP theorizes that the representations describing a document are based on various cognitive processes dealing with it, e.g., its title, its color or shape features, its creator, or its date of creation. This multitude of representations can be fused to form a conjunctive cognitive overlap (CO) in which highly relevant documents are likely to be contained. This explicit recommendation discriminates the PoP from the typical feature fusion approaches often used in MMIR.
However, the PoP does not answer how a retrieval model has to be implemented in a technical sense, which is of interest in the field of computer science. One possibility to implement the PoP is offered by quantum mechanics-inspired IR models such as the commuting quantum query language (CQQL), which is used in this thesis.
CQQL is particularly interesting because it integrates the data access paradigms used in the fields of DB and IR. In order to respect the dynamic nature of the search process and information need (IN), CQQL allows the personalization of retrieval results using a preference-based relevance feedback (RF) approach called PrefCQQL, which relies on machine learning.
Unique features of the PrefCQQL approach range from the support of negative query-by-example (QBE) documents at query formulation time as well as during the interactive retrieval process to the formulation of weak preferences between result documents to express gradual levels of relevance. In addition, inductive preferences can be used from query formulation time onward to learn new CQQL queries.
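The following toy sketch only illustrates the general idea behind preference-based feedback, searching for weights of a weighted CQQL-style conjunction that respect a stated preference between two hypothetical documents; it is not the PrefCQQL learning procedure itself, and all document names and scores are invented.

import itertools

def weighted_score(doc, theta_color, theta_texture):
    # Weighted CQQL-style conjunction of two similarity scores:
    # theta = 1 keeps a condition fully, theta = 0 switches it off.
    a = doc["color_sim"] * theta_color + 1.0 - theta_color
    b = doc["texture_sim"] * theta_texture + 1.0 - theta_texture
    return a * b

docs = {
    "img1": {"color_sim": 0.9, "texture_sim": 0.2},
    "img2": {"color_sim": 0.4, "texture_sim": 0.8},
}
preferences = [("img2", "img1")]  # the user prefers img2 over img1

grid = [i / 10 for i in range(11)]
for theta_c, theta_t in itertools.product(grid, grid):
    if all(weighted_score(docs[a], theta_c, theta_t) >
           weighted_score(docs[b], theta_c, theta_t)
           for a, b in preferences):
        print("feasible weights:", theta_c, theta_t)
        break  # first weight pair that satisfies every stated preference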
In order to evaluate the presented polyrepresentative PrefCQQL approach, two kinds of experiments are conducted: a Cranfield-inspired evaluation of CQQL/PrefCQQL's retrieval effectiveness, which is extended by the utilization of user simulations to better fit the requirements of the evaluation of an adaptive IR system, and a usability study that examines three alternative MMIR system UI prototypes. In order to increase the reproducibility and confirmability of the experiments, the source code to all used programs is made available as a supplement to this dissertation.
The mentioned experiments aim at answering two central questions: first, whether the hypotheses of the PoP can be verified in MMIR, and second, whether a usable interactive MMIR system can be built on the basis of the PoP and PrefCQQL.
To answer the first question, different matching functions that partly follow the recommendations of the PoP are evaluated with six different test collections in both a non-interactive and an interactive QBE scenario. The results of this experiment are ambivalent.
In non-interactive MMIR, the experimental data does not provide sufficient justification for the statement that PoP-based matching functions will always surpass single features or other matching functions. For instance, the arithmetic mean, which calculates the average similarity between a query's representations and the documents' representations in the collection, surpasses the conjunction, and hence the CO of multiple representations, in terms of retrieval effectiveness. Nevertheless, the matching function following the PoP is more stable in terms of effectiveness than the best-performing single representations per collection. Hence, the CO's retrieval performance is more reliable than the usage of single representations.
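A small sketch of the two fusion strategies under comparison, with made-up per-representation similarities (the conjunction is read here as the product of the scores, in the spirit of CQQL):

def conjunctive_overlap(sims):
    # Conjunctive cognitive overlap: product of per-representation similarities.
    score = 1.0
    for s in sims:
        score *= s
    return score

def arithmetic_mean(sims):
    return sum(sims) / len(sims)

# Per-representation similarities of one document to the query
# (e.g., title text, color feature, shape feature); values are invented.
sims = [0.9, 0.7, 0.3]

print(conjunctive_overlap(sims))  # punishes a single weak representation
print(arithmetic_mean(sims))      # lets strong representations compensate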
In contrast, the predictions of the PoP can be verified in the investigated PrefCQQL-based interactive MMIR scenario. However, it is important to note that the number of available representations also has an impact on the retrieval outcome. That is, if too few representations are present in a matching function, the corresponding IN model in PrefCQQL becomes subject to underfitting, eventually lowering its retrieval effectiveness. Unfortunately, at which point the number of representations becomes sufficient to support PrefCQQL could not be revealed in this dissertation.
The second question is answered with the help of a prototypical MMIR system: the Pythia system, which serves as a proof of concept of both the CQQL and the PrefCQQL approach. Furthermore, the system supports different information seeking strategies and a seamless transition between them in order to support users with different kinds of IN.
Mobile cyber-physical systems (MCPSs) such as motor vehicles, railed vehicles, aircraft, or spacecraft are commonly used in our lives today. These systems are location-independent and embedded in a physical environment that is usually harsh and uncertain. MCPSs are equipped with a wide range of sensors that continuously produce sensor data streams. These data streams must be processed appropriately in order to satisfy different monitoring objectives, and it is anticipated that the complexity of MCPSs, for instance the system description and the amount of data that must be processed, will continue to increase in the future. Accordingly, it is necessary to monitor these systems in order to provide reliability and to avoid critical damage. Monitoring is usually a semi-automatic process in which human experts are responsible for the consequent decisions. Thus, appropriate monitoring approaches are required that both provide a reasonably precise monitoring process and reduce the complexity of the monitoring process itself.
The contribution of the present thesis is threefold. First, a knowledge discovery cycle (KDC) has been developed, which aims to combine the research areas of knowledge discovery in databases and knowledge discovery from data streams to monitor MCPSs. The KDC is a cyclic process chain comprising an online subcycle and an offline subcycle. Second, a new data stream anomaly detection algorithm has been developed. Since it is necessary to identify a large number of system states automatically during operation, data stream anomaly detection becomes a key task for monitoring MCPSs. Third, the KDC and the anomaly detection algorithm have been prototypically implemented and a case study has been performed in a real-world scenario relating to the ISS Columbus module.
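As a rough illustration of data stream anomaly detection (explicitly not the algorithm developed in the thesis), the following sketch flags measurements whose z-score within a sliding window exceeds a threshold; the sensor values are synthetic.

from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=20, threshold=3.0):
    """Flag values that deviate strongly from the recent sliding-window history."""
    history = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value  # measurement flagged as anomalous
        history.append(value)

# Hypothetical sensor stream with one injected spike.
sensor_values = [20.0 + 0.1 * (i % 5) for i in range(100)]
sensor_values[60] = 35.0
print(list(detect_anomalies(sensor_values)))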
The discipline of software engineering is increasingly shifting from classical design and development tasks towards tasks concerning reuse, adaptation, and integration. Driving motivations for this shift are (i) decreasing development time and costs and (ii) increasing quality of design results. Unfortunately, these new tasks are not yet supported sufficiently. While classical approaches to information systems design are quite well suited to a design from scratch, they do not provide powerful concepts concerning reuse. In particular, methods concerning the encapsulation of dialog structures are commonly neglected. A corresponding approach that enables the reuse of dialog structures poses substantially different problems. This thesis formalizes such an approach, which is essentially based on models from three research areas: (i) interaction patterns, (ii) interaction specification, and (iii) component frameworks.
Today, developers of human-computer interaction increasingly face high expectations regarding the adaptivity of interaction. The World Wide Web is a fitting example of how diverse and 'fickle' the demands are that users, service providers, and even the available technical infrastructure place on interaction. In this thesis, a general framework for the development of interaction for such heterogeneous and dynamic interaction environments is created. The main focus is the provision of abstract constructs which make it possible to specify interaction abstractly, i.e., independently of the concrete properties of an individual interaction environment. We then show how such an abstract specification can be used to automatically create human-computer interfaces which, because the automatic generation takes the current interaction environment into account, are tailored to the requirements of the current user, technical infrastructure, etc.
The thesis discusses the problems of database development and maintenance, and presents an approach to conceptual tuning realized by conceptual design using the HERM/RADD notation. The RADD design tool has been designed in order to develop HERM specifications graphically. RADD adds semantics and operations to the design which are not directly annotated on the graphical specification, such as "afunctional" dependencies and SQL operations and procedures. The RADD/raddstar system extends the graphical specification of the database schema with the possibility to specify operations and with invocations for transforming the schema, for evaluating transactions, and for optimizing the schema, each according to the implicit requirements modeled graphically and the explicit requirements specified by means of the conceptual specification language (CSL). CSL is used as the command-line interface of RADD/raddstar. The graphical RADD schema as well as the CSL specifications are compiled by the system into terms of the RADD* data model, such that these terms can be used for further evaluation actions. The actions performed by RADD/raddstar (schema transformation, transaction and cost evaluation, schema optimization) are based on rules that can be developed and modified by the user using the CSL.
The dynamics of an information system (IS) is characterized not only by its computational behavior, but also by its interactive behavior. Interactive dynamics forms an integral part of most information systems. Despite this, the interactive nature of an IS is still poorly understood. Interaction impacts the expressiveness of an IS at such fundamental levels that Wegner [Weg97, Weg99a] put forward the contention that interactive behavior cannot be modeled by Turing machines (TMs). A TM is considered the foundational model of computation. It models computable functions that map between problem and solution domains. However, a TM models only non-interactive mappings. A mapping between a problem and a solution domain that is interactive in nature can change its direction of computation as a result of intermediate interactions. Based on this contention, Wegner proposes interaction (rather than computation) as the fundamental framework for IS modeling [Weg99].
In this thesis, we address Wegner's contention and the nature of interactive dynamics. An information system is modeled as a collection of semantic processes or Problem Solving Processes (PSPs). If these PSPs are interactive in nature, they are called open systems; if they are non-interactive, such an IS is called a closed system. Intuitively, open system dynamics are known to be richer than closed system dynamics. We make this distinction precise in this thesis. Interaction is shown to be made up of three properties: computation, persistence of state across computations, and channel sensitivity. Persistence of state and channel sensitivity each contribute to richer behavioral semantics than computation alone. This is shown by introducing a concept called the solution space of a semantic process. A solution space is the abstract domain characterized by the process dynamics. Interactive solution spaces are found to be richer than algorithmic solution spaces and to require at least a three-valued system of logic for their characterization.
The earlier question of interactive behavior as applied to IS design is then revisited. The interactive dynamics of an IS characterize the IS functionality. We call the solution space of interactive IS behavior its interaction space. The interaction space of an IS is contrasted with the object space of the IS, which is concerned with the IS structure and state-maintenance dynamics. The interaction space has a degree of autonomy with respect to the object space. This aspect is often not acknowledged in IS design, resulting in the intermixing of structural and functionality concerns. Separating these concerns can avoid certain conflicting problems in IS design, as well as provide better maintainability. We call this the "dual" nature of open systems.
Based on this insight, we propose an IS design paradigm called dualism, where an IS model is made up of an object schema, characterizing the IS structure, and an interaction schema, characterizing the IS functionality. The interaction schema is characterized by a three-valued system of logic, representing a set of obligated (or liveness) behavior, permitted (or possible) behavior, and forbidden behavior. The system should perform the obligated behavior to be termed functional; it may perform any of the permitted behavior, and it must not perform forbidden behavior. An analysis of the dynamics of any real-world system can make these three-valued characteristics apparent.
Domain theory is used to propose the solution space concept, and deontic logic is used to represent the three modalities of interactive IS behavior.
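As a rough illustration of the three-valued characterization (explicitly not the formal deontic treatment of the thesis), the following sketch classifies actions of a made-up interaction schema as obligated, permitted, or forbidden and checks an observed interaction trace against it.

from enum import Enum

class Modality(Enum):
    OBLIGATED = "obligated"   # must occur for the system to count as functional
    PERMITTED = "permitted"   # may occur
    FORBIDDEN = "forbidden"   # must not occur

# Hypothetical interaction schema; the action names are invented.
interaction_schema = {
    "confirm_order": Modality.OBLIGATED,
    "browse_catalog": Modality.PERMITTED,
    "ship_without_payment": Modality.FORBIDDEN,
}

def check_trace(trace, schema):
    """Check an observed interaction trace against the interaction schema."""
    violations = [a for a in trace if schema.get(a) is Modality.FORBIDDEN]
    missing = [a for a, m in schema.items()
               if m is Modality.OBLIGATED and a not in trace]
    return not violations and not missing

print(check_trace(["browse_catalog", "confirm_order"], interaction_schema))  # True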