Staff of the Chairs/Institutions of the Faculty of Computer Science and Mathematics
Precise, content-rich and well-structured document models are required for applications like verifying the consistency of documents. Creating such models for common documents is currently an expensive and error-prone process. In this thesis we present a novel approach to modelling and processing digital documents that uses semantic technologies. In contrast to other modelling approaches, we model the structure of documents as indicated by the content, not as defined by technical attributes like the file format. Additionally, our meta-model can be applied to a wide range of different documents, not just to a small set of documents with a predefined set of features. The models include semantic data and content relationships, which can be further extended with domain knowledge. Our new separation of technical and semantic document models fuels a standardised method for obtaining semantic models. This method is effective, suitable for live processing, and easily transferable to other document types and other domains. As it makes extensive use of background knowledge, we also present techniques for obtaining such knowledge, and for representing complex forms of knowledge with multiple meta-layers. A flexible technique for obtaining relevant data from our document models completes the approach. This includes the ability to obtain various verification models, suitable for different types of consistency criteria and for different validation formalisms. We conclude this thesis with an evaluation that shows the viability and effectiveness of the proposed approach. We present runtime results for an implementation based on RDF/OWL and the rule language JBoss Drools that are adequate for live processing. We also provide and successfully apply techniques for measuring the quality of both document models and background knowledge.
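The abstract mentions an RDF/OWL-based implementation. As a minimal, hypothetical sketch (not the author's actual code), the following shows how a content-based document model might be represented and checked with Python's rdflib; the vocabulary (doc:Document, doc:hasPart, doc:refersTo, ...) is invented for illustration.

```python
# Hypothetical sketch: a document's semantic structure in RDF, plus one
# consistency check as a SPARQL query. Vocabulary names are invented.
from rdflib import Graph, Namespace, Literal, RDF

DOC = Namespace("http://example.org/docmodel#")

g = Graph()
g.bind("doc", DOC)

# A document with two sections, modelled by content structure rather than
# by file-format attributes, as the thesis proposes.
g.add((DOC.thesis1, RDF.type, DOC.Document))
g.add((DOC.sec1, RDF.type, DOC.Section))
g.add((DOC.sec1, DOC.hasTitle, Literal("Introduction")))
g.add((DOC.sec2, RDF.type, DOC.Section))
g.add((DOC.sec2, DOC.hasTitle, Literal("Evaluation")))
g.add((DOC.thesis1, DOC.hasPart, DOC.sec1))
g.add((DOC.thesis1, DOC.hasPart, DOC.sec2))
g.add((DOC.sec2, DOC.refersTo, DOC.sec1))   # a content relationship

# Consistency criterion: a section must not refer to a section that is
# not part of the same document.
q = """
PREFIX doc: <http://example.org/docmodel#>
SELECT ?s ?t WHERE {
    ?s doc:refersTo ?t .
    ?d doc:hasPart ?s .
    FILTER NOT EXISTS { ?d doc:hasPart ?t }
}
"""
for row in g.query(q):
    print(f"dangling reference: {row.s} -> {row.t}")
```

In the thesis itself such criteria are evaluated with JBoss Drools rules rather than SPARQL; the query above only conveys the general idea of validating a semantic document model.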
This thesis addresses some of the algorithmic and numerical challenges associated with the computation of approximate border bases, a generalisation of border bases, in the context of the oil and gas industry. The concept of approximate border bases was introduced by D. Heldt, M. Kreuzer, S. Pokutta and H. Poulisse in "Approximate computation of zero-dimensional polynomial ideals" as an effective means to derive physically relevant polynomial models from measured data. The main advantages of this approach compared to alternative techniques currently in use in the (hydrocarbon) industry are its power to derive polynomial models without additional a priori knowledge about the underlying physical system and its robustness with respect to noise in the measured input data. The so-called Approximate Vanishing Ideal (AVI) algorithm, which can be used to compute approximate border bases and which was also introduced by D. Heldt et al. in the paper mentioned above, served as a starting point for the research conducted in this thesis. A central aim of this work is to broaden the applicability of the AVI algorithm to additional areas in the oil and gas industry, like seismic imaging and the compact representation of unconventional geological structures. For this purpose several new algorithms are developed, among others the so-called Approximate Buchberger-Möller (ABM) algorithm and the Extended-ABM algorithm. The numerical aspects and the runtime of the methods are analysed in detail, based on a solid foundation of the underlying mathematical and algorithmic concepts that is also provided in this thesis. It is shown that the worst-case runtime of the ABM algorithm is cubic in the number of input points, which is a significant improvement over the biquadratic worst-case runtime of the AVI algorithm. Furthermore, we show that the ABM algorithm allows us to exercise more direct control over the essential properties of the computed approximate border basis than the AVI algorithm. The improved runtime and the additional control turn out to be the key enablers for the new industrial applications that are proposed here. As a conclusion to the work on the computation of approximate border bases, a detailed comparison between the approach in this thesis and some other state-of-the-art algorithms is given. Furthermore, this work also addresses one important shortcoming of approximate border bases, namely that central concepts from exact algebra, such as syzygies, could so far not be translated to the setting of approximate border bases. One way to mitigate this problem is to construct a "close by" exact border basis for a given approximate one. Here we present and discuss two new algorithmic approaches that allow us to compute such close by exact border bases. In the first one, we establish a link between this task, referred to as the rational recovery problem, and the problem of simultaneously quasi-diagonalising a set of complex matrices. As simultaneous quasi-diagonalisation is not a standard topic in numerical linear algebra, there are hardly any off-the-shelf algorithms and implementations available that are both fast and numerically adequate for our purposes. To bridge this gap we introduce and study a new algorithm that is based on a variant of the classical Jacobi eigenvalue algorithm and also works for non-symmetric matrices.
As a second solution to the rational recovery problem, we motivate and discuss how to compute a close by exact border basis via the minimisation of a sum-of-squares expression formed from the polynomials in the given approximate border basis. Finally, several applications of the newly developed algorithms are presented. These include production modelling of oil and gas fields, the reconstruction of subsurface velocities for simple subsurface geometries, the compact representation of unconventional oil and gas bodies via algebraic surfaces, and the stable numerical approximation of the roots of zero-dimensional polynomial ideals.
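The common idea behind the AVI/ABM family of algorithms is to find polynomials whose evaluation vectors on the measured points are almost zero. As a loose numerical illustration only (not the AVI or ABM algorithm itself), the following sketch recovers an approximately vanishing polynomial on noisy circle samples from the smallest singular vector of an evaluation matrix; the degree bound and noise level are arbitrary assumptions.

```python
# Illustrative sketch: find coefficients c of a bivariate polynomial
# (monomials up to degree 2) that approximately vanishes on sample points,
# via the smallest singular vector of the evaluation matrix. This conveys
# the idea behind approximate vanishing ideals, not the ABM algorithm.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 50)
# Noisy samples of the unit circle, i.e. of the variety x^2 + y^2 - 1 = 0.
x = np.cos(t) + 0.01 * rng.standard_normal(50)
y = np.sin(t) + 0.01 * rng.standard_normal(50)

# Evaluation matrix for the monomials 1, x, y, x^2, xy, y^2.
M = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# The right singular vector for the smallest singular value minimises
# ||M c|| subject to ||c|| = 1, i.e. it is a unitary polynomial that
# "epsilon-vanishes" on the data.
_, s, Vt = np.linalg.svd(M, full_matrices=False)
c = Vt[-1]
print("smallest singular value:", s[-1])   # on the order of the noise
print("coefficients (1, x, y, x^2, xy, y^2):", np.round(c, 3))
# Up to sign and scaling, c is close to (-1, 0, 0, 1, 0, 1), i.e. the
# defining equation of the circle is recovered from noisy data.
```

The actual ABM algorithm organises such computations degree by degree over an order ideal of monomials, which is what yields the cubic worst-case runtime and the finer control mentioned above.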
This thesis investigates the suitability of state-of-the-art protocols for large-scale and long-term environmental event monitoring using wireless sensor networks, based on the application scenario of early forest fire detection. By a suitable combination of energy-efficient protocol mechanisms, a novel communication protocol, referred to as the cross-layer message-merging protocol (XLMMP), is developed. Qualitative and quantitative protocol analyses are carried out to confirm that XLMMP is particularly suitable for this application area. The quantitative analysis is mainly based on finite-source retrial queues with multiple unreliable servers. While this queueing model is widely applicable in various research areas even beyond communication networks, this thesis is the first to determine the distribution of the response time in this model. The model evaluation is mainly carried out using Markovian analysis and the method of phases. The obtained quantitative results show that XLMMP is a feasible basis for designing scalable wireless sensor networks that (1) may comprise hundreds of thousands of tiny sensor nodes with reduced node complexity, (2) are suitable to monitor an area of tens of square kilometers, and (3) achieve a lifetime of several years. The deduced quantifiable relationships between key network parameters (e.g., node size, node density, size of the monitored area, desired lifetime, and the maximum end-to-end communication delay) enable application-specific optimization of the protocol.
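For readers unfamiliar with the queueing model, here is a rough Monte-Carlo sketch of a finite-source retrial queue with multiple unreliable servers. All rates are made-up example values, the failure rule is a deliberate simplification, and the mean response time is estimated via Little's law; the thesis instead derives the response-time distribution analytically.

```python
# Rough CTMC (Gillespie-style) simulation of a finite-source retrial queue
# with c unreliable servers. All parameter values are invented examples.
import random

N, c = 50, 3            # sources (sensor nodes) and servers
lam = 0.05              # request rate per idle source
mu = 1.0                # service rate per busy server
nu = 0.5                # retrial rate per orbiting request
delta, tau = 0.01, 0.2  # server failure and repair rates

random.seed(1)
orbit, busy, up = 0, 0, c      # orbiting requests, busy servers, up servers
T, area, done = 0.0, 0.0, 0    # time, time-integral of requests in system, completions

while T < 200_000:
    idle = N - orbit - busy
    rates = [lam * idle,        # new request from an idle source
             nu * orbit,        # retrial attempt from the orbit
             mu * busy,         # service completion
             delta * up,        # server failure
             tau * (c - up)]    # server repair
    total = sum(rates)
    dt = random.expovariate(total)
    area += (orbit + busy) * dt          # requests currently in the system
    T += dt
    r = random.uniform(0, total)
    if r < rates[0]:                                     # arrival
        if busy < up: busy += 1
        else: orbit += 1
    elif r < rates[0] + rates[1]:                        # retrial
        if busy < up: orbit -= 1; busy += 1              # else: back to orbit
    elif r < rates[0] + rates[1] + rates[2]:             # completion
        busy -= 1; done += 1
    elif r < rates[0] + rates[1] + rates[2] + rates[3]:  # failure
        up -= 1                 # simplification: idle servers fail first;
        if busy > up:           # an interrupted request rejoins the orbit
            busy -= 1; orbit += 1
    else:                                                # repair
        up += 1

throughput = done / T
print("mean requests in system:", area / T)
print("mean response time (Little's law):", area / T / throughput)
```

Such a simulation only yields means; determining the full response-time distribution, as the thesis does via Markovian analysis and the method of phases, is the genuinely hard part.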
Up to a few years ago, the typical operation of a distributed architecture was modelled as the enactment of a collaborative protocol by networked nodes. In this context, all nodes were under the system designer's control, faithfully executing the programmed behaviour. However, today's networks are often characterized by a free aggregation of nodes. This increases the possibility that a selfish party operates a node and violates the collaborative protocol in order to increase a personal benefit. If such violations conflict with the system goals, they can even be considered an attack. Current fault-tolerance techniques may weaken the harmful impact to some degree, but they cannot necessarily prevent it. Furthermore, different architectures differ in their fault-tolerance capabilities. This emphasizes the need for a systematic approach to achieving collaboration in distributed systems. In this PhD thesis we consider the problem of attaining a targeted level of collaboration in a distributed architecture deployed over rational, selfishly driven nodes, which have an interest in deviating from the communication protocol to increase a personal benefit. In order to reach this goal, and to cover a broad spectrum of systems, we do not modify the architecture or the communication protocol itself. Instead, we add a monitoring logic that inspects a node's behaviour in terms of its correct interaction with the system. With this approach, the system designer needs to weigh several aspects, such as the specific environmental circumstances, the inspection effort, and the nodes' individual preferences. Furthermore, he should consider that each agent could be aware of the other agents' preferences and selfishness, and make strategic choices accordingly. The natural frame for modelling such a complex, interdependent and possibly interactive decision landscape is Game Theory (GT). In this context, the monitoring setup proposed in this thesis corresponds to a class of GT models known as Inspection Games (IG). Such games were introduced in 1962, in their simplest formulation, by Dresher in the context of non-proliferation treaties and arms control. They model the general situation where one party, the inspector, verifies through inspections the correct behaviour of another party, called the inspectee. However, inspections are costly and the inspector's resources are limited, so complete surveillance is impossible and the inspector will try to minimize the number of inspections. A strategy combination (violating/inspecting or not) that both parties consider optimal represents a Nash equilibrium of the game. In this thesis, the initial IG model is enriched by the possibility of false negatives, i.e. the probability that a violation is not detected during an inspection. Both the initial and the enriched model remain abstract and can thus easily find interdisciplinary application. As the solution approach of this thesis in the context of distributed systems, the model captures the network participants' strategy choices. As an outcome, the IG model makes it possible to calculate system parameters that shift the Nash equilibrium to the desired target collaboration. The approach is designed as a framework and can therefore be applied to any architecture, any selfish goal, and any reliability technique. For the sake of concreteness, we discuss the IG approach by means of the illustrative case of a Publish/Subscribe (pub/sub) architecture.
In this way, messages over the communication infrastructure have a specific associated semantics. The Inspection Game approach of this thesis secures the whole collaborative protocol in order to attain a correctly working system up to a specific degree of collaboration. This represents a completely new approach to reliability mechanisms; hence, this thesis can be considered fundamental research. To enable broad application, the generality of the approach is supported by further contributions, among them the software library RCourse for practical robustness evaluations of overlay networks and a simulation environment for further research on the abstract IG model. All developments will finally be published as open-source software.
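To make the equilibrium reasoning concrete, here is a small, hypothetical 2x2 inspection game with a false-negative probability. All payoff values are invented, and the mixed Nash equilibrium is derived from the standard indifference conditions, not from the thesis's actual model.

```python
# Hypothetical 2x2 inspection game with false negatives; all numbers invented.
# Inspectee chooses comply/violate; inspector chooses inspect/not inspect.
g = 1.0   # inspectee's gain from an undetected violation
p = 3.0   # inspectee's penalty when a violation is detected
c = 0.1   # inspector's cost per inspection
d = 2.0   # inspector's damage from an undetected violation
f = 0.2   # false-negative probability: violation missed despite inspection

# Mixed Nash equilibrium via indifference: the inspectee's indifference
# between violating and complying fixes the inspection probability x;
# the inspector's indifference fixes the violation probability q.
x = g / ((1 - f) * (g + p))   # equilibrium inspection probability
q = c / ((1 - f) * d)         # equilibrium violation probability
print(f"inspect with prob {x:.4f}, violate with prob {q:.4f}")

# Sanity check: both players are indeed indifferent at (x, q).
ee_violate = x * (f * g - (1 - f) * p) + (1 - x) * g   # vs. comply = 0
ins_inspect = -c - q * f * d                           # inspect
ins_idle = -q * d                                      # do not inspect
assert abs(ee_violate) < 1e-12 and abs(ins_inspect - ins_idle) < 1e-12
```

In this toy model, shifting the equilibrium toward a target collaboration level amounts to choosing system parameters (here the inspection cost c, the penalty p, or the false-negative rate f) so that the equilibrium violation probability q falls below the tolerated rate, which mirrors the parameter-calculation role of the IG model described above.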
In this thesis a new approach to building product recommender systems is introduced. By using a customer-centric dialogue, the customers' preferences are elicited. These are the basis for inferring utility estimations about the desired technical properties of the products in question. Systems built this way can both operate autonomously, e.g., in an online store, and support a salesperson directly at the point-of-sale. The core of the approach is formed by a layered domain description that models customer stereotypes and needs, product attributes, the products themselves, and the causal interrelations between customer and product properties. Maintenance of the domain description, i.e., keeping the model up-to-date in the face of frequent changes, is facilitated by the clear separation of concerns provided by the layered structure. In fact, the most frequently used class of updates can be handled in an entirely automated way if some constraints are satisfied. On a high level of abstraction, the system behavior is described by State Charts that are parameterized according to the domain description. Those parts of the system description where State Charts would be too imprecise are implemented by separate components realizing the required complex semantics. From the domain description, a Bayesian network is generated that forms the core of the inference engine of the recommender system. The network essentially controls the system-initiated dialogue flow and the recommendation process. Due to the characteristics of Bayesian networks, it is possible to respond to user-initiated dialogue steps in a natural way. Moreover, an explanation of the current recommendation can be generated without having to explicitly encode additional information in the modeling layer. Finally, a database structure and the SQL queries necessary to obtain recommendations can be inferred from the corresponding parts of the domain description. Instantiation of the system to a specific business domain is supported by a dedicated maintenance application that hides the complexities of the underlying algorithms. Thus, day-to-day system updates by non-technical domain experts, e.g., product managers, are facilitated. The developed concepts were implemented in cooperation with a local industry partner who intends to apply the recommender system in the field of mobile communications.
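As a toy illustration of the inference step only (not the network generated by the thesis), the following sketch wires up a three-node Bayesian network in which a latent customer need explains both an observed dialogue answer and the preferred product attribute, and computes the posterior over the attribute by exact enumeration. All variable names and probabilities are invented.

```python
# Toy Bayesian-network sketch: a latent customer Need explains both the
# observed dialogue Answer and the preferred product Attribute. All names
# and CPT values are invented; the thesis generates such networks from its
# layered domain description.
P_need = {"reliability": 0.45, "price": 0.55}   # prior over customer needs
P_answer_given_need = {                          # dialogue layer
    "reliability": {"yes_cheap": 0.2, "no_cheap": 0.8},
    "price":       {"yes_cheap": 0.9, "no_cheap": 0.1},
}
P_attr_given_need = {                            # product layer
    "reliability": {"flagship": 0.7, "budget": 0.3},
    "price":       {"flagship": 0.1, "budget": 0.9},
}

def posterior_attribute(answer):
    """P(attribute | answer), summing out the latent need by enumeration."""
    # P(need | answer) via Bayes' rule.
    joint = {n: P_need[n] * P_answer_given_need[n][answer] for n in P_need}
    z = sum(joint.values())
    p_need = {n: v / z for n, v in joint.items()}
    # Marginalize the need out of the product layer.
    return {
        a: sum(p_need[n] * P_attr_given_need[n][a] for n in p_need)
        for a in ("flagship", "budget")
    }

# Customer answers "yes" to "Is a low price important to you?":
print(posterior_attribute("yes_cheap"))   # budget devices now dominate
```

The same posterior machinery is what allows a system of this kind to react naturally to user-initiated dialogue steps: any answer can be entered as evidence at any time, and the recommendation distribution is simply recomputed.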
IMPACT 2013 in Berlin, Germany (in conjunction with HiPEAC 2013) is the third workshop in a series of international workshops on polyhedral compilation techniques. The previous workshops were held in Chamonix, France (2011) in conjunction with CGO 2011 and Paris, France (2012) in conjunction with HiPEAC 2012.