Document ID Document type Author(s) Editor Main title Abstract Edition Place of publication Publisher Year of publication Number of pages Series title Series volume ISBN Source of thesis Conference name Source: Title Source: Volume Source: Issue Source: First page Source: Last page URN DOI Departments OPUS4-395 Dissertation Alsarem, Mazen Semantic Snippets via Query-Biased Ranking of Linked Data Entities In our knowledge-driven society, the acquisition and the transfer of knowledge play a principal role. Web search engines are, in a sense, tools for knowledge acquisition and transfer from the web to the user. The search engine results page (SERP) consists mainly of a list of links and snippets (excerpts from the results). The snippets are used to express, as efficiently as possible, the way a web page may be relevant to the query. As an extension of the existing web, the semantic web or "web 3.0" is designed to convert the presently available web of unstructured documents into a web of data consumable by both humans and machines. The resulting web of data and the current web of documents coexist and interconnect via multiple mechanisms, such as embedded structured data or automatic annotation. In this thesis, we introduce a new interactive artifact for the SERP: the "Semantic Snippet". Semantic Snippets rely on the coexistence of the two webs to facilitate the transfer of knowledge to the user thanks to a semantic contextualization of the user's information need. They make apparent the relationships between the information need and the most relevant entities present in the web page. The generation of semantic snippets is mainly based on the automatic annotation of Linked Open Data (LOD) entities in web pages. The annotated entities have different levels of importance, usefulness, and relevance. Even with state-of-the-art solutions for the automatic annotation of LOD entities within web pages, there is still a lot of noise in the form of erroneous or off-topic annotations. Therefore, we propose a query-biased algorithm (LDRANK) for the ranking of these entities. LDRANK adopts a strategy based on the linear consensual combination of several sources of prior knowledge (any form of contextual knowledge, such as the textual descriptions of the graph's nodes) to modify a PageRank-like algorithm. For generating semantic snippets, we use LDRANK to find the most relevant entities in the web page. Then, we use a supervised learning algorithm to link each selected entity to excerpts from the web page that highlight the relationship between the entity and the original information need. In order to evaluate our semantic snippets, we integrate them in ENsEN (Enhanced Search Engine), a software system that enhances the SERP with semantic snippets. Finally, we use crowdsourcing to evaluate the usefulness and the efficiency of ENsEN. 2016 148 S. urn:nbn:de:bvb:739-opus4-3959 Fakultät für Informatik und Mathematik OPUS4-404 Dissertation Tomashevich, Victor Fault Tolerance Aspects of Virtual Massive MIMO Systems Employment of a very large number of antennas is seen as the key technology to provide future users with very high data rates. At the same time, the implementation complexity will rise due to the large memories required and the sophisticated signal processing algorithms employed. Continuous technology downscaling allows the implementation of such complex digital designs.
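As a brief aside on the LDRANK ranking step described in the Alsarem abstract above (a PageRank-like algorithm whose bias is a linear consensual combination of prior-knowledge sources), here is a minimal sketch. The weighting scheme, the uniform handling of dangling nodes, and the toy priors are assumptions for illustration, not the author's implementation.

```python
import numpy as np

def prior_biased_pagerank(adjacency, priors, weights, damping=0.85, iters=100):
    """PageRank-like ranking whose teleport vector is a linear
    combination of several (query-dependent) prior distributions."""
    n = adjacency.shape[0]
    # Column-stochastic transition matrix; dangling nodes jump uniformly.
    out_deg = adjacency.sum(axis=0)
    transition = np.where(out_deg > 0, adjacency / np.maximum(out_deg, 1), 1.0 / n)
    # Linear consensual combination of the prior-knowledge sources.
    teleport = sum(w * p for w, p in zip(weights, priors))
    teleport = teleport / teleport.sum()
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = damping * transition @ rank + (1 - damping) * teleport
    return rank

# Toy example: 4 LOD entities, two prior sources (e.g. textual similarity
# to the query and a global popularity prior), equally weighted.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 0, 0, 0]], dtype=float)  # A[i, j] = 1: entity j links to entity i
text_prior = np.array([0.6, 0.2, 0.1, 0.1])
popularity_prior = np.array([0.25, 0.25, 0.25, 0.25])
print(prior_biased_pagerank(A, [text_prior, popularity_prior], [0.5, 0.5]))
```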
At the same time, its inherent variability and vulnerability to physical disturbances violate the assumption of perfectly reliable hardware operation. This work considers Unique Word OFDM, which represents an alternative to standard Cyclic Prefix OFDM and provides superior detection quality. Unique Word OFDM is generalized to a MIMO system, which allows its interpretation as a virtual massive MIMO system with only a few physical antennas. Detection methods for the introduced generalization are discussed and their performance is quantified. Because of the large memory size required, linear detection represents the cost- and performance-effective solution. The possible memory errors due to radiation effects or voltage scaling are addressed and a nonlinear MMSE detection algorithm is proposed. This algorithm keeps track of the memory errors and is able to significantly mitigate their effect on the quality of the estimated data. Apart from memory issues, the reliability of the actual computational hardware which constitutes the receiver is of concern in this work. Our own implementation of the MMSE Sorted Givens Rotations is subjected to transient fault injection. The impact of faults in various parts of the implemented circuit on the detection performance is quantified. The most vulnerable components of the implemented circuit in terms of reliability are identified. Security is another major concern of this work, since most current implementations include cryptographic devices. Fault-based attacks on such systems are known to be able to extract the secret key in feasible time. The remaining part of this work addresses such fault injection-based malicious attacks. Countermeasures based on a combination of information and hardware redundancy are considered. Recently introduced robust codes target such attacks by providing guaranteed detection capability. The performance of these codes is assessed by application to actual cryptographic and general-purpose circuits. The work introduces metrics that help to identify fault locations in the circuit which could escape detection with high probability. These locations are targeted by transistor resizing that renders fault injection infeasible. 2016 183 urn:nbn:de:bvb:739-opus4-4047 Fakultät für Informatik und Mathematik OPUS4-541 Dissertation de Ponte Müller, Fabian Cooperative Relative Positioning for Vehicular Environments Driver assistance systems are an essential building block for increasing road traffic safety. Safety-critical applications in particular require accurate information about the position and speed of the vehicles in the immediate vicinity in order to anticipate potential hazards, warn the driver, or intervene autonomously. Representative examples of assistance systems that depend on accurate, continuous, and reliable relative positioning of other road users are emergency braking assistants, lane change assistants, and adaptive cruise control. Modern approaches use environment sensors such as radar, laser scanners, or cameras to estimate the position of neighboring vehicles. The common drawbacks of these sensor systems are their limited detection range and the need for a direct, unobstructed line of sight to the neighboring vehicle. Cooperative solutions based on vehicle-to-vehicle communication can extend a vehicle's perception range by exchanging position information between road users.
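Returning briefly to the linear detection mentioned in the Tomashevich abstract above: a generic linear MMSE equalizer can be sketched as follows. This is a textbook illustration under an assumed flat-fading model y = Hx + n with known channel and noise variance, not the thesis' Unique Word OFDM receiver.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmse_detect(H, y, noise_var):
    """Linear MMSE estimate: x_hat = (H^H H + sigma^2 I)^-1 H^H y."""
    n_tx = H.shape[1]
    G = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
    return G @ y

# Toy 4x4 MIMO link carrying QPSK symbols.
n_tx, n_rx, noise_var = 4, 4, 0.1
symbols = (rng.choice([-1, 1], n_tx) + 1j * rng.choice([-1, 1], n_tx)) / np.sqrt(2)
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ symbols + noise

x_hat = mmse_detect(H, y, noise_var)
# Hard decision back onto the QPSK alphabet.
decided = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print(np.allclose(decided, symbols))
```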
This dissertation investigates cooperative relative positioning of road vehicles via vehicle-to-vehicle communication with respect to its accuracy, continuity, and robustness. Instead of transmitting the position determined independently in each vehicle, a novel approach exchanges raw GNSS data such as pseudoranges and Doppler measurements. This has the advantage that correlated errors in both vehicles can potentially cancel out. This is analyzed mathematically, modeled in simulation, and verified experimentally in this dissertation. To increase reliability and continuity even in "disturbed" environments, the raw GNSS data are fused with inertial sensor measurements from both vehicles in a Bayesian filter. The sensor fusion approach was validated within this dissertation in a traffic simulator as well as in a GNSS simulator. For the experimental evaluation, two test vehicles were equipped with the various sensors and measurement runs were carried out in diverse environments. This work shows that, on highways, the relative position of another vehicle can be estimated continuously with an accuracy of less than one meter. High reliability in the longitudinal and lateral directions is achieved, and the system exhibits an uncertainty below 2.5 m for 90% of the time. In rural environments, the uncertainty of the relative position grows. With the help of the on-board sensors, errors during drives through forests and villages can be correctly compensated. In urban environments, the limitations of the system become apparent. Owing to the more difficult estimation of the ego vehicle's heading, the longitudinal component of the relative position in particular is strongly distorted in urban environments. 2016 vii, 247 Seiten urn:nbn:de:bvb:739-opus4-5411 Fakultät für Informatik und Mathematik OPUS4-552 Dissertation Hanauer, Kathrin Linear Orderings of Sparse Graphs The Linear Ordering problem consists in finding a total ordering of the vertices of a directed graph such that the number of backward arcs, i.e., arcs whose heads precede their tails in the ordering, is minimized. A minimum set of backward arcs corresponds to an optimal solution to the equivalent Feedback Arc Set problem and forms a minimum Cycle Cover. Linear Ordering and Feedback Arc Set are classic NP-hard optimization problems and have a wide range of applications. Whereas both problems have been studied intensively on dense graphs and tournaments, not much is known about their structure and properties on sparser graphs. There are also only a few approximation algorithms that give performance guarantees, especially for graphs with bounded vertex degree. This thesis fills this gap in multiple respects: We establish necessary conditions for a linear ordering (and thereby also for a feedback arc set) to be optimal, which provide new and fine-grained insights into the combinatorial structure of the problem. From these, we derive a framework for polynomial-time algorithms that construct linear orderings which adhere to one or more of these conditions. The analysis of the linear orderings produced by these algorithms is especially tailored to graphs with bounded vertex degrees of three and four and improves on previously known upper bounds. Furthermore, the set of necessary conditions is used to implement exact and fast algorithms for the Linear Ordering problem on sparse graphs.
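To make the objective of the Linear Ordering problem described above (Hanauer) concrete, here is a tiny helper that counts the backward arcs induced by a given vertex ordering, i.e., the arcs a feedback arc set must cover. The graph representation is an assumption for illustration; the thesis' ordering-construction algorithms are not reproduced here.

```python
def backward_arcs(arcs, ordering):
    """Return the arcs (tail, head) whose head precedes its tail in the
    given total ordering of the vertices."""
    position = {v: i for i, v in enumerate(ordering)}
    return [(u, v) for (u, v) in arcs if position[v] < position[u]]

# Toy digraph with a 3-cycle: at least one arc must point backwards.
arcs = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "d")]
print(backward_arcs(arcs, ["a", "b", "c", "d"]))       # [('c', 'a')]
print(len(backward_arcs(arcs, ["c", "a", "b", "d"])))  # 1
```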
In an experimental evaluation, we finally show that the property-enforcing algorithms produce linear orderings that are very close to the optimum and that the exact representative delivers solutions in a timely manner in practice as well. As an additional benefit, our results can be applied to the Acyclic Subgraph problem, which is the complementary problem to Feedback Arc Set, and provide insights into the dual problem of Feedback Arc Set, the Arc-Disjoint Cycles problem. 2018 vii, 282 Seiten urn:nbn:de:bvb:739-opus4-5524 Fakultät für Informatik und Mathematik OPUS4-299 Dissertation Liebig, Jörg Analysis and Transformation of Configurable Systems Static analysis tools and transformation engines for source code belong to the standard equipment of a software developer. Their use simplifies a developer's everyday work of maintaining and evolving software systems significantly and, hence, accounts for much of a developer's programming efficiency and programming productivity. This is also beneficial from a financial point of view, as programming errors are detected early and avoided in the development process; thus, the use of static analysis tools reduces the overall software-development costs considerably. In practice, software systems are often developed as configurable systems to account for different requirements of application scenarios and use cases. To implement configurable systems, developers often use compile-time implementation techniques, such as preprocessors with #ifdef directives. Configuration options control the inclusion and exclusion of #ifdef-annotated source code, and their selection/deselection serves as an input for generating tailor-made system variants on demand. Existing configurable systems, such as the Linux kernel, often provide thousands of configuration options, forming a huge configuration space with billions of system variants. Unfortunately, existing tool support cannot handle the myriads of system variants that can typically be derived from a configurable system. Analysis and transformation tools are not prepared for variability in source code, and, hence, they may process it incorrectly, resulting in incomplete and often broken tool support. We challenge the way configurable systems are analyzed and transformed by introducing variability-aware static analysis tools and a variability-aware transformation engine for configurable systems' development. The main idea of such tool support is to exploit commonalities between system variants, reducing the effort of analyzing and transforming a configurable system. In particular, we develop novel analysis approaches for analyzing the myriads of system variants and compare them to state-of-the-art analysis approaches (namely sampling). The comparison shows that variability-aware analysis is complete (with respect to covering the whole configuration space), efficient (it outperforms some of the sampling heuristics), and scales even to large software systems. We demonstrate that variability-aware analysis is practical even for non-trivial case studies, such as the Linux kernel. On top of variability-aware analysis, we develop a transformation engine for C, which respects variability induced by the preprocessor.
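To illustrate the #ifdef-based variability discussed above (Liebig), here is an assumed, much-simplified sketch of how a configuration selects one variant from annotated source lines. It handles only plain #ifdef/#else/#endif blocks (no #if expressions, no macro expansion) and is in no way the thesis' variability-aware infrastructure.

```python
def derive_variant(lines, config):
    """Keep or drop source lines according to #ifdef/#else/#endif
    annotations and the set of enabled options in `config`."""
    out, keep_stack = [], [True]
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("#ifdef"):
            option = stripped.split()[1]
            keep_stack.append(keep_stack[-1] and option in config)
        elif stripped.startswith("#else"):
            keep_stack[-1] = not keep_stack[-1] and keep_stack[-2]
        elif stripped.startswith("#endif"):
            keep_stack.pop()
        elif keep_stack[-1]:
            out.append(line)
    return out

source = [
    "int read_block(void) {",
    "#ifdef CACHE",
    "  return cache_lookup();",
    "#else",
    "  return disk_read();",
    "#endif",
    "}",
]
print("\n".join(derive_variant(source, {"CACHE"})))
```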
The engine provides three common refactorings (rename identifier, extract function, and inline function) and overcomes shortcomings (completeness, use of heuristics, and scalability issues) of existing engines, while still being semantics-preserving with respect to all variants and being fast, providing an instantaneous user experience. To validate semantics preservation, we extend a standard testing approach for refactoring engines with variability and show in real-world case studies the effectiveness and scalability of our engine. In the end, our analysis and transformation techniques show that configurable systems can efficiently be analyzed and transformed (even for large-scale systems), providing the same guarantees for configurable systems as for standard systems in terms of detecting and avoiding programming errors. 2015 160 urn:nbn:de:bvb:739-opus4-2996 Fakultät für Informatik und Mathematik OPUS4-304 Dissertation Braun, Bastian Web-based Secure Application Control The world wide web today serves as a distributed application platform. Its origins, however, go back to a simple delivery network for static hypertexts. The legacy from these days can still be observed in the communication protocol used by increasingly sophisticated clients and applications. This thesis identifies the actual security requirements of modern web applications and shows that HTTP does not fit them: user and application authentication, message integrity and confidentiality, control-flow integrity, and application-to-application authorization. We explore the other protocols in the web stack and work out why they can not fill the gap. Our analysis shows that the underlying problem is the connectionless property of HTTP. However, history shows that a fresh start with web communication is far from realistic. As a consequence, we come up with approaches that contribute to meet the identified requirements. We first present impersonation attack vectors that begin before the actual user authentication, i.e. when secure web interaction and authentication seem to be unnecessary. Session fixation attacks exploit a responsibility mismatch between the web developer and the used web application framework. We describe and compare three countermeasures on different implementation levels: on the source code level, on the framework level, and on the network level as a reverse proxy. Then, we explain how the authentication credentials that are transmitted for the user login, i.e. the password, and for session tracking, i.e. the session cookie, can be complemented by browser-stored and user-based secrets respectively. This way, an attacker can not hijack user accounts only by phishing the user's password because an additional browser-based secret is required for login. Also, the class of well-known session hijacking attacks is mitigated because a secret only known by the user must be provided in order to perform critical actions. In the next step, we explore alternative approaches to static authentication credentials. Our approach implements a trusted UI and a mutually authenticated session using signatures as a means to authenticate requests. This way, it establishes a trusted path between the user and the web application without exchanging reusable authentication credentials. As a downside, this approach requires support on the client side and on the server side in order to provide maximum protection. 
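The session fixation countermeasure discussed above (Braun) essentially boils down to issuing a fresh session identifier at the moment of authentication, so a pre-login identifier planted by an attacker becomes worthless. The following framework-agnostic sketch with an in-memory session table is an assumption for illustration, not one of the three countermeasure implementations described in the thesis.

```python
import secrets

SESSIONS = {}  # session_id -> session data (in-memory sketch)

def login(old_session_id, username, password_ok):
    """On successful authentication, migrate the session data to a freshly
    generated identifier so a fixated pre-login ID cannot be reused."""
    if not password_ok:
        return old_session_id
    data = SESSIONS.pop(old_session_id, {})
    data["user"] = username
    new_session_id = secrets.token_urlsafe(32)  # unguessable replacement
    SESSIONS[new_session_id] = data
    return new_session_id

anonymous_id = secrets.token_urlsafe(32)
SESSIONS[anonymous_id] = {"cart": ["book"]}
authenticated_id = login(anonymous_id, "alice", password_ok=True)
print(anonymous_id in SESSIONS, authenticated_id in SESSIONS)  # False True
```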
Another approach avoids client-side support but cannot implement a trusted UI and is thus susceptible to phishing and clickjacking attacks. Our approaches described so far increase the security level of all web communication at all times. This is why we investigate adaptive security policies that fit the actual risk instead of permanently restricting all kinds of communication including non-critical requests. We develop a smart browser extension that detects when the user is authenticated on a website, meaning that she can be impersonated because all requests carry her identity proof. Uncritical communication, however, is released from restrictions to enable all intended web features. Finally, we focus on attacks targeting a web application's control-flow integrity. We explain them thoroughly, check whether current web application frameworks provide means for protection, and implement two approaches to protect web applications: The first approach is an extension for a web application framework and provides protection based on its configuration by checking all requests for policy conformity. The second approach generates its own policies ad hoc based on the observed web traffic, assuming that regular users only click on links and buttons and fill forms but do not craft requests to protected resources. 2015 urn:nbn:de:bvb:739-opus4-3048 Fakultät für Informatik und Mathematik OPUS4-479 Dissertation Fischer, Andreas An Evaluation Methodology for Virtual Network Embedding The increasing scale and complexity of computer networks imposes a need for highly flexible management mechanisms. The concept of network virtualization promises to provide this flexibility. Multiple arbitrary virtual networks can be constructed on top of a single substrate network. This allows network operators and service providers to tailor their network topologies to the specific needs of any offered service. However, the assignment of resources proves to be a problem. Each newly defined virtual network must be realized by assigning appropriate physical resources. For a given set of virtual networks, two questions arise: Can all virtual networks be accommodated in the given substrate network? And how should the respective resources be assigned? The underlying problem is commonly known as the Virtual Network Embedding problem. A multitude of algorithms has already been proposed, aiming to provide solutions to that problem under various constraints. For the evaluation of these algorithms, an empirical approach is typically adopted, using artificially created random problem instances. However, due to complex effects of random problem generation, the obtained results can be hard to interpret correctly. A structured evaluation methodology that can avoid these effects is currently missing. This thesis aims to fill that gap. Based on a thorough understanding of the problem itself, the effects of random problem generation are highlighted. A new simulation architecture is defined, increasing the flexibility for experimentation with embedding algorithms. A novel way of generating embedding problems is presented which mitigates the effects of conventional problem generation approaches. An evaluation using these newly defined concepts demonstrates how new insights on algorithm behavior can be gained. The proposed concepts support experimenters in obtaining more precise and tangible evaluation data for embedding algorithms. 2017 XVII, 179 S.
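The Virtual Network Embedding problem sketched above (Fischer) can be illustrated with a deliberately naive greedy node mapping; the CPU-only capacity model and the greedy rule are assumptions for illustration, link mapping is omitted, and nothing here reflects the embedding algorithms evaluated in the thesis.

```python
def greedy_node_mapping(virtual_demand, substrate_capacity):
    """Map each virtual node to the substrate node with the most remaining
    CPU capacity; return None if some demand cannot be satisfied."""
    remaining = dict(substrate_capacity)
    mapping = {}
    # Place the most demanding virtual nodes first.
    for v_node, demand in sorted(virtual_demand.items(), key=lambda kv: -kv[1]):
        candidate = max(remaining, key=remaining.get)
        if remaining[candidate] < demand:
            return None  # embedding request rejected
        mapping[v_node] = candidate
        remaining[candidate] -= demand
    return mapping

substrate = {"s1": 10, "s2": 8, "s3": 4}
virtual = {"v1": 6, "v2": 5, "v3": 3}
print(greedy_node_mapping(virtual, substrate))
```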
urn:nbn:de:bvb:739-opus4-4793 Fakultät für Informatik und Mathematik OPUS4-286 Dissertation Rosenthal, Kristina Die Tits-Alternative für eine relevante Klasse endlich präsentierter Gruppen unter besonderer Berücksichtigung computeralgebraischer Aspekte A group satisfies the Tits alternative if it either contains a non-abelian free subgroup of rank 2 or is virtually solvable, i.e., contains a solvable subgroup of finite index. This property goes back to J. Tits, who proved it for finitely generated linear groups. A relevant class of finitely presented groups is examined with respect to the Tits alternative. The groups under consideration generalize Pride groups and the groups generated by periodically paired relations on three generators studied by Vinberg. In addition, these groups occur as fundamental groups of hyperbolic orbifolds. The Tits alternative is proven under certain conditions on the presentations of the groups under consideration. Several methods are applied for this proof: on the one hand, homomorphic images of the groups are considered; on the other hand, the existence of essential representations into a linear group is established. Based on these representations, the existence of non-abelian free subgroups can be proven in many cases. Additionally, to prove finiteness, and thereby the Tits alternative, for some of the groups, a method based on computations of Gröbner bases in non-commutative polynomial rings is applied, in which the dimension of the group rings, viewed as vector spaces, is computed. For the class of groups under consideration, the Tits alternative is proven completely for relations of block length 1. As a consequence, a classification of the finite groups among them is obtained. 2014 urn:nbn:de:bvb:739-opus4-2865 Fakultät für Informatik und Mathematik OPUS4-827 Dissertation Kurz, Thomas Adapting Semantic Web Information Retrieval to Multimedia The amount of audio, video, and image data on the Web is growing immensely, which leads to data management problems due to the hidden character of multimedia. Therefore, the interlinking of semantic concepts and media data with the aim of bridging the gap between the Internet of documents and the Web of Data has become a common practice. However, the value of connecting media to its semantic metadata is limited due to lacking access methods and the absence of a query language specialized for media assets and fragments. This thesis aims to extend the standard query language for the Semantic Web (SPARQL) with media-specific concepts and functions. The main contributions of the work are an exhaustive survey of multimedia query languages of the last three decades, the SPARQL extension specification itself, and an approach for the efficient evaluation of the new query concepts. Additionally, I elaborate and evaluate a metadata-based media fragment similarity approach, which provides a basis for further language extensions. 2019 xvi, 206 Seiten urn:nbn:de:bvb:739-opus4-8276 Fakultät für Informatik und Mathematik OPUS4-836 Dissertation Planche, Benjamin Bridging the Realism Gap for CAD-Based Visual Recognition Computer vision aims at developing algorithms to extract high-level information from images and videos.
In the industry, for instance, such algorithms are applied to guide manufacturing robots, to visually monitor plants, or to assist human operators in recognizing specific components. Recent progress in computer vision has been dominated by deep artificial neural network, i.e., machine learning methods simulating the way that information flows in our biological brains, and the way that our neural networks adapt and learn from experience. For these methods to learn how to accurately perform complex visual tasks, large amounts of annotated images are needed. Collecting and labeling such domain-relevant training datasets is, however, a tedious—sometimes impossible—task. Therefore, it has become common practice to leverage pre-available three-dimensional (3D) models instead, to generate synthetic images for the recognition algorithms to be trained on. However, methods optimized over synthetic data usually suffer a significant performance drop when applied to real target images. This is due to the realism gap, i.e., the discrepancies between synthetic and real images (in terms of noise, clutter, etc.). In my work, three main directions were explored to bridge this gap. First, an innovative end-to-end framework is proposed to render realistic depth images from 3D models, as a growing number of solutions (especially in the industry) are utilizing low-cost depth cameras (e.g., Microsoft Kinect and Intel RealSense) for recognition tasks. Based on a thorough study of these devices and the different types of noise impairing them, the proposed framework simulates their inner mechanisms, comprehensively modeling vital factors such as sensor noise, material reflectance, surface geometry, etc. Able to simulate a wide panel of depth sensors and to quickly generate large datasets, this framework is used to train algorithms for various recognition tasks, consistently and significantly enhancing their performance compared to other state-of-the-art simulation tools. In some cases, however, relevant 2D or 3D object representations to generate synthetic samples are not available. Considering this different case of data scarcity, a solution is then proposed to incrementally build a representation of visual scenes from partial observations. Provided observations are localized from one to another based on their content and registered in a global memory with spatial properties. Simultaneously, this memory can be queried to render novel views of the scene. Furthermore, unobserved regions can be hallucinated in memory, in consistence with previous observations, hallucinations, and global priors. The efficacy of the proposed mnemonic and generative system, trainable end-to-end, is demonstrated on various 2D and 3D use-cases. Finally, an advanced convolutional neural network pipeline is introduced, tackling the realism gap from a novel angle. While most methods addressing this problem focus on bringing synthetic samples—or the knowledge acquired from them—closer to the real target domain, the proposed solution performs the opposite process, mapping unseen target images into controlled synthetic domains. The pre-processed samples can then be handed to downstream recognition methods, themselves purely trained on similar synthetic data, to greatly improve their accuracy. For each approach, a variety of qualitative and quantitative studies are detailed, providing successful comparisons to state-of-the-art methods. 
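A toy version of the depth-sensor simulation idea described above (Planche): corrupting a clean synthetic depth map with depth-dependent axial noise and random dropouts. The noise model and all parameters are illustrative assumptions, far simpler than the end-to-end sensor simulation framework of the thesis.

```python
import numpy as np

def simulate_depth_noise(depth, axial_coeff=0.002, dropout_rate=0.05, seed=0):
    """Corrupt a clean depth map (metres) with depth-dependent Gaussian
    axial noise and random zero-valued dropouts, as low-cost sensors do."""
    rng = np.random.default_rng(seed)
    noisy = depth + rng.normal(0.0, axial_coeff * depth**2)  # noise grows with distance
    dropout = rng.random(depth.shape) < dropout_rate
    noisy[dropout] = 0.0  # missing measurements
    return noisy

clean = np.full((480, 640), 1.5)   # flat wall at 1.5 m
clean[200:280, 280:360] = 1.0      # a box closer to the camera
noisy = simulate_depth_noise(clean)
print(noisy.mean(), (noisy == 0).mean())
```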
By proposing solutions to bridge the realism gap from either side, as well as a pipeline to improve the acquisition and generation of new visual content, this thesis provides a unique perspective on the challenges of data scarcity when building robust recognition systems. 2020 xx, 152 urn:nbn:de:bvb:739-opus4-8361 Fakultät für Informatik und Mathematik OPUS4-771 Dissertation Lucas, Yvan Credit card fraud detection using machine learning with integration of contextual knowledge We have proposed a strategy for the creation of attributes based on hidden Markov models (HMM) characterizing the transaction from different points of view. This strategy makes it possible to integrate a broad spectrum of sequential information into the attributes of transactions. In fact, we model the authentic and fraudulent behavior of merchants and card holders according to two univariate characteristics: the date and the amount of transactions. In addition, attributes based on HMMs are created in a supervised manner, thereby reducing the need for expert knowledge for the creation of the fraud detection system. Ultimately, our HMM-based multi-perspective approach allows automated data pre-processing to model time correlations to complement and eventually replace transaction aggregation strategies to improve detection efficiency. Experiments carried out on a large set of credit card transaction data from the real world (46 million transactions carried out by Belgian card holders between March and May 2015) have shown that the strategy proposed for data preprocessing based on HMM can detect more fraudulent transactions when combined with the strategy of preprocessing reference data based on expert knowledge for the detection of credit card fraud. 2019 xxi, 125 Seiten urn:nbn:de:bvb:739-opus4-7713 Fakultät für Informatik und Mathematik OPUS4-900 Dissertation Ansah, Frimpong Performance and optimization technologies for software defined industrial networks The concept of programmable networks is radically changing the way communication infrastructures are designed, integrated, and operated. Currently, the topic is spearheaded by concepts such as software-defined networking, forwarding and control element separation, and network function virtualization. Notably, software-defined networking has attracted significant attention in telecommunication and data centers and thus already in some production-grade networks. Despite the prevalence of software-defined networking in these domains, industrial networks are yet to see its benefits to encourage adoption. However, the misconceptions around the concept itself, the role of virtualization, and algorithms pose a significant obstacle. Furthermore, the desire to accommodate new services in the automation industry results in a pattern of constantly increasing complexity of industrial networks, which is compounded by the requirement to provide stringent deterministic service guarantees considering characteristically different applications and thus posing a significant challenge for management, configuration, and maintenance as existing solutions are architecturally inflexible. Therefore, the first contribution of this thesis addresses the misconceptions around software-defined networking by providing a comparative analysis of programmable network concepts, detailing where software-defined networks compare with other concepts and how its principles can be leveraged to evolve industrial networks. 
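The HMM-based feature creation described in the Lucas abstract above can be pictured as scoring a transaction sequence with the forward algorithm of a small discrete HMM and attaching the resulting log-likelihood as a feature. The two-state model and its parameters below are invented for illustration; the thesis learns such models from genuine and fraudulent merchant and cardholder sequences.

```python
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Forward algorithm in the log domain: log P(obs | HMM)."""
    log_alpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        log_alpha = np.logaddexp.reduce(
            log_alpha[:, None] + np.log(trans), axis=0) + np.log(emit[:, o])
    return np.logaddexp.reduce(log_alpha)

# Two hidden states ("regular" / "bursty"); observations are binned amounts {0,1,2}.
start = np.array([0.8, 0.2])
trans = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
emit = np.array([[0.7, 0.25, 0.05],   # regular spending favours small amounts
                 [0.1, 0.30, 0.60]])  # bursty spending favours large amounts
sequence = [0, 0, 1, 2, 2]
# The score can be attached to the last transaction as an additional attribute.
print(log_likelihood(sequence, start, trans, emit))
```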
Armed with the fundamental principles of programmable networks, the second contribution identifies virtualization technologies and proposes novel algorithms to provide varied quality of service guarantees on converged time-sensitive Ethernet networks using software-defined networking concepts. Finally, a performance analysis of a software-defined hybrid deployment solution for control and management of time-sensitive Ethernet networks that integrates proposed novel algorithms is presented as an industrial use-case that enables industrial operators to harness the full potential of time-sensitive networks. Passau Universität Passau 2021 xxi, 173 Seiten urn:nbn:de:bvb:739-opus4-9002 Fakultät für Informatik und Mathematik OPUS4-757 Dissertation Charpenay, Victor Semantics for the Web of Things: Modeling the Physical World as a Collection of Things and Reasoning with their Descriptions The main research question of this thesis is to develop a theory that would provide foundations for the development of Web of Things (WoT) systems. A theory for WoT shall provide a model of the 'things' WoT agents relate to such that these relations determine what interactions take place between these agents. This thesis presents a knowledge-based approach in which the semantics of WoT systems is given by a transformation (an homomorphism) between a graph representing agent interactions and a knowledge graph describing 'things'. It focuses on three aspects of knowledge graphs in particular: the vocabulary with which assertions can be made, the rules that can be defined over this vocabulary and its serialization to efficiently exchange pieces of a knowledge graph. Each aspect is developed in a dedicated chapter, with specific contributions to the state-of-the-art. The need for a unified vocabulary to describe 'things' in WoT and the Internet of Things (IoT) has been identified early on in the literature. Many proposals have been consequently published, in the form of Web ontologies. In Ch. 2, a systematic review of these proposals is being developed, as well as a comparison with the data models of the principal IoT frameworks and protocols. The contribution of the thesis in that respect is an alignment between the Thing Description (TD) model and the Semantic Sensor Network (SSN) ontology, two standards of the World Wide Web Consortium (W3C). The scope of this thesis is generally limited to Web standards, especially those defined by the Resource Description framework (RDF). Web ontologies do not only expose a vocabulary but also rules to extend a knowledge graph by means of reasoning. Starting from a set of TD documents, new relations between 'things' can be "discovered" this way, indicating possible interactions between the servients that relate to them. The experiments presented in Ch. 3 were done on the basis of this semantic discovery framework on two use cases: a building automation use case provided by Intel Labs and an industrial control use case developed internally at Siemens. The relations to discover often involve anonymous nodes in the knowledge graph: the chapter also introduces a novel skolemization algorithm to correctly process these nodes on a well-defined fragment of the Web Ontology Language (OWL). Finally, because this semantic discovery framework relies on the exchange of TD documents, Ch. 4 introduces a binary format for RDF that proves efficient in serializing TD assertions such that even the smallest WoT agents, i.e. micro-controllers, can store and process them. 
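To give a flavour of the compact RDF exchange motivated above (Charpenay), here is an assumed, much-simplified dictionary-encoding sketch: every term is stored once and triples are kept as small integer tuples, which is what makes them cheap enough for constrained devices. The vocabulary terms with the ex: prefix are placeholders, and this is not the binary format specified in the thesis.

```python
class TinyTripleStore:
    """Dictionary-encoded triple store: terms are interned once and
    triples are held as compact integer tuples."""

    def __init__(self):
        self.term_to_id, self.id_to_term, self.triples = {}, [], set()

    def _encode(self, term):
        if term not in self.term_to_id:
            self.term_to_id[term] = len(self.id_to_term)
            self.id_to_term.append(term)
        return self.term_to_id[term]

    def add(self, s, p, o):
        self.triples.add((self._encode(s), self._encode(p), self._encode(o)))

    def objects(self, s, p):
        s_id, p_id = self.term_to_id.get(s), self.term_to_id.get(p)
        return [self.id_to_term[o] for (si, pi, o) in self.triples
                if si == s_id and pi == p_id]

store = TinyTripleStore()
store.add("urn:dev:lamp1", "rdf:type", "ex:Lamp")
store.add("urn:dev:lamp1", "ex:hasState", "ex:Off")
print(store.objects("urn:dev:lamp1", "rdf:type"))
```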
A formalization for the semantics-preserving compaction and querying of TD documents is also introduced in this chapter; it forms the basis of an embedded RDF store called the µRDF store. The ability of all WoT agents to query logical assertions about themselves and their environment, as found in TD documents, is a first step towards knowledge-based intelligent systems that can operate autonomously and dynamically in a decentralized way. The µRDF store is an attempt to illustrate the practical outcomes of the theory of WoT developed throughout this thesis. 2019 xiii, 127 Seiten urn:nbn:de:bvb:739-opus4-7578 Fakultät für Informatik und Mathematik OPUS4-670 Dissertation Berndl, Emanuel Embedding a Multimedia Metadata Model into a Workflow-driven Environment Using Idiomatic Semantic Web Technologies The Semantic Web has existed for about 20 years now, but neither its applicability nor its presence lives up to its original idea. The Semantic Web Technologies involved have an initial barrier to learn and apply, which can discourage many potential users. This leads to less available data overall in addition to decreased data quality. This work solves parts of the aforementioned problem by supporting idiomatic entry to those Semantic Web Technologies, allowing for "easier" accessibility and usability. Anno4j is a Java library that implements a form of Object-Relational Mapping for RDF data. With its application, RDF data can be created via a mapping by simply instantiating Java objects - an object-oriented programming concept the user is familiar with. On the other side, requesting persisted data is supported by path-based querying, while other features like transactional behaviour, code generation, and automated validation of input contribute to a more effective, comprehensive, and straightforward usage. A use case is provided by the MICO Platform, a centralized software instance that connects autonomous multimedia extractors in a workflow-driven fashion. This leads to a rich metadata background for the inserted multimedia files, enabling them to be used in diverse scenarios as well as unlocking yet hidden semantics. For this task it was necessary to design and implement a metadata model that is able to aggregate and merge the varying extractor results under a common denominator: the MICO Metadata Model. The results of this work allow the use case to incorporate idiomatic Semantic Web Technologies which are then usable natively by non-Semantic Web experts. Additionally, improvements have been achieved in terms of data integration, synchronisation, integrity, and validity, as well as an overall more comprehensive and rich implementation of the multimedia extractors. 2018 xix, 301 Seiten urn:nbn:de:bvb:739-opus4-6708 Fakultät für Informatik und Mathematik OPUS4-673 Dissertation Kolesnikov, Sergiy Feature Interactions in Configurable Software Systems Software has become an important part of our life. Therefore, the number of different application scenarios and user requirements of software systems grows rapidly. To satisfy these requirements, software vendors build configurable software systems that can be tailored to diverse needs without rebuilding them from scratch, which reduces costs and development time. Despite considerable advances in software engineering, which allow building high-quality configurable software systems, some challenges remain.
One of these challenges is the feature interaction problem that arises when parts (features), from which a configurable system is composed, interact in unexpected ways, and inadvertently change the behavior or quality attributes (such as performance) of the system. The goal of this dissertation is to systematically study the nature of feature interactions, their causes, their influence on performance of configurable systems, and, based on empirical results, suggest ways of improving techniques for detecting and predicting feature interactions. More specifically, we compared and evaluated different strategies for the analysis of configurable software systems. The results of our evaluation complement empirical data from previous work about how different analysis strategies for configurable software systems compare with respect to different aspects, such as performance. These results shall be used to develop effective and scalable techniques and tools for analysis of configurable software including feature-interaction detection and prediction techniques and tools. Technically, we used a machine-learning technique to quantify the influence of feature interactions on performance of real-world configurable systems. We studied the characteristics of interactions that have the largest influence on performance and found that interactions among few features have higher influence than interactions among many features. With a growing number of interacting features, the influence of the corresponding interactions decreases consistently. This implies that interactions involving multiple features can be ignored in practice because of their marginal influence on performance. We also investigated the causes of the interactions and were able to identify several patterns that link these interactions to the architecture of the systems: For example, we found that if a data processing system consisted of multiple features that processed the same data in sequence then these features interacted. The identified patterns can help to anticipate performance interactions already at an early development stage when a system's architecture is designed. Furthermore, considering that control-flow interactions (observable at the level of control flow among features) are easier to detect than performance interactions (externally observable through measuring performance of different combinations of features), we conducted a case study on two configurable systems. In this case study, we investigated a possible relation among control-flow feature interactions and performance feature interactions. We also discussed how this relation can be exploited by interaction detection and performance prediction techniques to make them more time efficient and precise. Our case study on two real-world configurable systems revealed that a relation indeed exists, and we were able to show how it can be used to reduce the search space of possibly existing performance interactions. The study can serve as a blueprint for further studies that can rely on our conceptual framework for investigating relations among external and internal interactions. Overall, the contribution of this dissertation consists of scientific and technical insights, practical tool implementations, empirical evaluations, and case studies that advance the current state of research in the area of feature interactions in configurable software systems. 
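The idea of quantifying feature-interaction influences on performance (Kolesnikov, above) can be illustrated by fitting a linear performance-influence model with pairwise interaction terms over measured configurations. The toy measurements and the plain least-squares fit below are assumptions for illustration, not the machine-learning technique used in the dissertation.

```python
import itertools
import numpy as np

def fit_influence_model(configs, perf, feature_names):
    """Least-squares fit of performance = base + per-feature terms +
    pairwise interaction terms; returns the learned influences."""
    pairs = list(itertools.combinations(range(len(feature_names)), 2))
    rows = []
    for c in configs:
        interactions = [c[i] * c[j] for i, j in pairs]
        rows.append([1.0, *c, *interactions])
    X = np.array(rows, dtype=float)
    coef, *_ = np.linalg.lstsq(X, np.array(perf, dtype=float), rcond=None)
    names = ["base", *feature_names,
             *[f"{feature_names[i]}*{feature_names[j]}" for i, j in pairs]]
    return dict(zip(names, np.round(coef, 3)))

# Toy measurements: encryption alone costs 5, compression alone 3,
# and enabling both together costs an extra 4 (an interaction).
feature_names = ["encryption", "compression"]
configs = [(0, 0), (1, 0), (0, 1), (1, 1)]
perf = [10.0, 15.0, 13.0, 22.0]
print(fit_influence_model(configs, perf, feature_names))
```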
In particular, we provide insights into the causes of feature interactions and their influence on the performance of real-world configurable systems (e.g., interaction patterns, decreasing influence of interactions with growing number of involved features). Our results also suggest ways of improving techniques for detecting and predicting feature interactions (e.g., ignoring interactions among multiple features, reducing the search space based on relations among interactions). 2019 ix, 140 Seiten urn:nbn:de:bvb:739-opus4-6739 Fakultät für Informatik und Mathematik OPUS4-846 Dissertation Stahlbauer, Andreas Abstract Transducers for Software Analysis and Verification Whenever software faults can endanger human life, property, or the environment, the absence of faults must be ensured with utmost care and the best technologies available. Evidence is needed showing that all requirements are satisfied and that the risk of faults is reduced. One technique to conduct such a verification task—composed of the software to verify, the specification to check, and a model of the environment—is software model checking. To conduct a verification task with a model checker, different models of the task are constructed. We distinguish between two types of task models: syntactic task models and semantic task models, which define the respective syntactic structure (control flow) and semantic structure (state transitions, invariants) of the verification task. When constructing such models, we can observe that similar structures and substructures reappear within and among different verification tasks. For example, the same assertions to check can appear in different functions, or the same predicate can be part of different invariants to describe sets of program states. Similarities that appear during the model construction process can be the result of solving similar reasoning problems, often solved using computationally expensive procedures (as typical for model checking), over and over again. Not reusing results of solving similar problems, not having a means for conducting repeated efforts automatically, or not trying to reduce the number of similar reasoning efforts, is a waste of precious resources. To address these problems, we present a common conceptual and technical foundation for sharing syntactic and semantic task artifacts for reuse, within and among verification runs. Both the syntactic construction of a verification task and the construction of its semantic model—which describes all possible behaviors and states—are covered. We study how commonalities and regularities in the task models can be taken into account to facilitate the process of sharing task artifacts for reuse, and to make the overall verification process more efficient and effective. We introduce abstract transducers as the theoretical foundation of this thesis: a type of finite-state transducer with an inherent notion of abstraction for states, the input alphabet, and its output alphabet. Abstracting these transducers allows us to widen both the set of input words for which they produce output and the sets of output words. Abstract transducers are instantiated as task artifact transducers to map from program structures to task artifacts to share. We show that the notion of abstraction provides a means for increasing the scope for which task artifacts are shared for reuse. We present two instances of task artifact transducers: Yarn transducers and precision transducers.
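Since abstract transducers (Stahlbauer, above) build on classical finite-state transducers, a minimal deterministic transducer may help fix the intuition of mapping input words to output words. The states, alphabet, and the abstraction notion of the thesis are not modeled here; this is a generic textbook sketch.

```python
class FiniteStateTransducer:
    """Deterministic transducer: transitions map (state, input symbol)
    to (next state, output word)."""

    def __init__(self, initial, transitions, accepting):
        self.initial = initial
        self.transitions = transitions  # {(state, symbol): (state, output)}
        self.accepting = accepting

    def run(self, word):
        state, output = self.initial, []
        for symbol in word:
            if (state, symbol) not in self.transitions:
                return None  # no output for words outside the input language
            state, out = self.transitions[(state, symbol)]
            output.extend(out)
        return output if state in self.accepting else None

# Toy transducer: emits an "assert" artifact whenever a "call" is later matched by "ret".
fst = FiniteStateTransducer(
    initial="q0",
    transitions={("q0", "call"): ("q1", []),
                 ("q1", "ret"): ("q0", ["assert"]),
                 ("q1", "call"): ("q1", [])},
    accepting={"q0"},
)
print(fst.run(["call", "ret", "call", "call", "ret"]))  # ['assert', 'assert']
```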
We use Yarn transducers for providing code to weave into the control-flow structure of a computer program, and present the Loom analysis as a means for orchestrating the weaving process. Precision transducers provide a means for sharing abstraction precisions for reuse and thus aid in defining the level of abstraction of a semantic task model. For both types of transducers, we provide empirical evidence on their practical applicability, for example, to verify Linux kernel modules, and show that they can help increase verification performance. 2019 xv, 187 Seiten urn:nbn:de:bvb:739-opus4-8468 Fakultät für Informatik und Mathematik OPUS4-870 Dissertation Garchery, Mathieu User-centered intrusion detection using heterogeneous data With the frequency and impact of data breaches rising, it has become essential for organizations to automate intrusion detection via machine learning solutions. This generally comes with numerous challenges, among others high class imbalance, changing target concepts, and difficulties in conducting sound evaluation. In this thesis, we adopt a user-centered anomaly detection perspective to address selected challenges of intrusion detection, through a real-world use case in the identity and access management (IAM) domain. In addition to the previous challenges, salient properties of this particular problem are the high relevance of categorical data, limited feature availability, and the total absence of ground truth. First, we ask how to apply anomaly detection to IAM audit logs containing a restricted set of mixed (i.e. numeric and categorical) attributes. Then, we inquire how anomalous user behavior can be separated from normality, and this separation evaluated without ground truth. Finally, we examine how the lack of audit data can be alleviated in two complementary settings. On the one hand, we ask how to cope with users without relevant activity history ("cold start" problem). On the other hand, we investigate how to extend audit data collection with heterogeneous attributes (i.e. categorical, graph, and text) to improve insider threat detection. After aggregating IAM audit data into sessions, we introduce and compare general anomaly detection methods for mixed data to a user identification approach, designed to learn the distinction between normal and malicious user behavior. We find that user identification outperforms general anomaly detection and is effective against masquerades. An additional clustering step helps reduce false positives among similar users. However, user identification is not effective against insider threats. Furthermore, results suggest that the current scope of our audit data collection should be extended. In order to tackle the "cold start" problem, we adopt a zero-shot learning approach. Focusing on the CERT insider threat use case, we extend an intrusion detection system by integrating user relations to organizational entities (like assignments to projects or teams) in order to better estimate user behavior and improve intrusion detection performance. Results show that this approach is effective in two realistic scenarios. Finally, to support additional sources of audit data for insider threat detection, we propose a method representing audit events as graph edges with heterogeneous attributes. By performing detection at a fine-grained level, this approach advantageously improves anomaly traceability while reducing the need for aggregation and feature engineering.
Our results show that this method is effective to find intrusions in authentication and email logs. Overall, our work suggests that masquerades and insider threats call for different detection methods. For masquerades, user identification is a promising approach. To find malicious insiders, graph features representing user context and relations to other entities can be informative. This opens the door for tighter coupling of intrusion detection with user identities, roles and privileges used in IAM solutions. 2020 vii, 119 Seiten urn:nbn:de:bvb:739-opus4-8704 Fakultät für Informatik und Mathematik OPUS4-871 Dissertation Koop, Martin Preventing the Leakage of Privacy Sensitive User Data on the Web Das Aufzeichnen der Internetaktivität ist mit der Verknüpfung persönlicher Daten zu einer Schlüsselressource für viele kostenpflichtige und kostenfreie Dienste im Web geworden. Diese Dienste sind zum einen Webanwendungen, wie beispielsweise die von Google bereitgestellten Karten/Navigation oder Websuche, die täglich kostenlos verwendet werden. Zum anderen sind es alle Webseiten, die meist kostenlos Nachrichten oder allgemeine Informationen zu verschiedenen Themen bereitstellen. Durch das Aufrufen und die Nutzung dieser Webdienste werden alle Informationen, die im Webdienst verarbeitet werden, an den Dienstanbieter weitergeben. Dies umfasst nicht nur die im Benutzerkonto des Webdienstes gespeicherte Profildaten wie Name oder Adresse, sondern auch die Aktivität mit dem Webdienst wie das anklicken von Links oder die Verweildauer. Darüber hinaus gibt es jedoch auch unzählige Drittparteien, welche zumeist im Hintergrund in die Webdienste eingebunden sind und das Benutzerverhalten der kompletten Webaktivität - Webseiten übergreifend - mitspeichern sowie auswerten. Der Einsatz verschiedener, in der Regel für den Benutzer verborgener Techniken, dient dazu das Online-Verhalten der Benutzer genau zu verfolgen und viele sensible Daten zu sammeln. Dieses Verhalten wird als Web-Tracking bezeichnet und wird hauptsächlich von Werbeunternehmen genutzt. Die gesammelten Daten sind oft personenbezogen und eine wertvolle Ressourcen der Unternehmen, um Beispielsweise passend zum Benutzerprofil personalisierte Werbung schalten zu können. Mit der Nutzung dieser personenbezogenen Daten entstehen aber auch weitreichendere Auswirkungen, welche sich unter anderem in Preisanpassungen für Benutzer mit speziellen Profilattributen, wie der Nutzung von teuren Endgeräten, widerspiegeln. Ziel dieser Arbeit ist es die Privatsphäre der Nutzer im Internet zu steigern und die Nutzerverfolgung von Web-Tracking signifikant zu reduzieren. Dabei stellen sich vier Herausforderungen, die jeweils einen Forschungsschwerpunkt dieser Arbeit bilden: (1) Systematische Analyse und Einordnung eingesetzter Tracking-Techniken, (2) Untersuchung vorhandener Schutzmechanismen und deren Schwachstellen,(3) Konzeption einer Referenzarchitektur zum Schutz vor Web-Tracking und (4) Entwurf einer automatisierten Testumgebungen unter Realbedingungen, um die Reduzierung von Web-Tracking in den entwickelten Schutzmaßnahmen zu untersuchen. Jeder dieser Forschungsschwerpunkte stellt neue Beiträge bereit, um einheitlich das übergeordnete Ziel zu erreichen: der Entwicklung von Schutzmaßnahmen gegen die Preisgabe sensibler Benutzerdaten im Internet. 
Der erste wissenschaftliche Beitrag dieser Dissertation ist eine umfassende Evaluation eingesetzter Web-Tracking Techniken und Methoden, sowie deren Gefahren, Risiken und Implikationen für die Privatsphäre der Internetnutzer. Die Evaluation beinhaltet zusätzlich die Untersuchung vorhandener Tracking-Schutzmechanismen und deren Schwachstellen. Die gewonnenen Erkenntnisse sind maßgeblich für die in dieser Arbeit neu entwickelten Ansätze und verbessern den bisherigen nicht hinreichend gewährleisteten Schutz vor Web-Tracking. Der zweite wissenschaftliche Beitrag ist die Entwicklung einer robusten Klassifizierung von Web-Tracking, der Entwurf einer effizienten Architektur zur Langzeituntersuchung von Web-Tracking sowie einer interaktiven Visualisierung des Auftreten von Web-Tracking im Internet. Dabei basiert der neue Klassifizierungsansatz, um Tracking zu identifizieren, auf der Entropie Messung des Informationsgehalts von Cookies. Die Resultate der Web-Tracking Langzeitstudien sind unter anderem 1.209 identifizierte Tracking-Domains auf den meistbesuchten Webseiten in Deutschland. Hierbei wurden innerhalb der Top 25 Webseiten im Durchschnitt 45 Tracking-Elemente pro Webseite gefunden. Der Tracker mit dem höchsten Potenzial zum Erstellen eines Benutzerprofils war doubleclick.com, da er 90% der Webseiten überwacht. Die Auswertung des untersuchten Tracking-Netzwerks ergab weiterhin einen detaillierten Einblick in die Tracking-Technik mithilfe von Weiterleitungslinks. Dabei haben wir 1,2 Millionen HTTP-Traces von monatelangen Crawls der 50.000 international meistbesuchten Webseiten analysiert. Die Ergebnisse zeigen, dass 11,6% dieser Webseiten HTTP-Redirects, verborgen in Webseiten-Links, zum Tracken verwenden. Dies wird eingesetzt, um den Webseitenverlauf des Benutzers nach dem Klick durch eine Kette von (Tracking-)Servern umzuleiten, welche in der Regel nicht sichtbar sind, bevor das beabsichtigte Link-Ziel geladen wird. In diesem Szenario erfasst der Tracker wertvolle Verbindungs-Metadaten zu Inhalt, Thema oder Benutzerinteressen der Website. Die Visualisierung des Tracking Ökosystem stellen wir in einem interaktiven Open-Source Web-Tool bereit. Der dritte wissenschaftliche Beitrag dieser Dissertation ist die Konzeption von zwei neuartigen Schutzmechanismen gegen Web-Tracking und der Aufbau einer automatisierten Simulationsumgebung unter Realbedingungen, um die Effektivität der Umsetzungen zu verifizieren. Der Fokus liegt auf den beiden meist verwendeten Tracking-Verfahren: Cookies (hierbei wird eine eindeutigen ID auf dem Gerät des Benutzers gespeichert), sowie Browser-Fingerprinting. Letzteres beschreibt eine Methode zum Sammeln einer Vielzahl an Geräteeigenschaften, um den Benutzer eindeutig zu (re- )identifizieren, ohne eine eindeutige ID auf dem Gerät zu speichern. Um die Effektivität der in dieser Arbeit entwickelten Schutzmechanismen vor Web-Tracking zu untersuchen, implementierten und evaluierten wir die Schutzkonzepte direkt im Chromium Browser. Das Ergebnis zeigt eine erfolgreiche Reduzierung von Web-Tracking um 44%. Zusätzlich verbessert das in dieser Arbeit entwickelte Konzept "Site Isolation" den Datenschutz des privaten Browsing-Modus, ermöglicht das Setzen eines manuellen Speicher-Zeitlimits von Cookies und schützt den Browser gegen verschiedene Bedrohungen wie CSRF (Cross-Site Request Forgery) oder CORS (Cross-Origin Ressource Sharing). 
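The entropy-based cookie classification mentioned in the Koop abstract above can be illustrated with a character-level Shannon-entropy estimate over cookie values: long, high-entropy values are likely unique identifiers. The threshold and the character-level estimate are illustrative assumptions, not the classifier developed in the dissertation.

```python
import math
from collections import Counter

def shannon_entropy_bits(value):
    """Character-level Shannon entropy of a cookie value, in bits (total)."""
    counts = Counter(value)
    n = len(value)
    per_char = -sum(c / n * math.log2(c / n) for c in counts.values())
    return per_char * n

def looks_like_tracking_id(value, threshold_bits=60):
    """Flag cookie values whose information content suggests a unique ID."""
    return shannon_entropy_bits(value) >= threshold_bits

print(looks_like_tracking_id("lang=de"))                      # False
print(looks_like_tracking_id("id=Jx8f3kQ92bL7wPq0sVt5yHnZ"))  # True
```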
Site Isolation speichert dabei den Status der lokalen Website in separaten Containern und kann dadurch diverse Tracking-Methoden wie Cookies, lokalStorage oder redirect tracking verhindern. Bei der Auswertung von 1,6 Millionen Webseiten haben wir gezeigt, dass der Tracker doubleclick.com das höchste Potenzial besitzt, den Nutzer zu verfolgen und auf 25% der 40.000 international meistbesuchten Webseiten vertreten ist. Schließlich demonstrieren wir in unserem erweiterten Chromium-Browser einen robusten Browser-Fingerprinting-Schutz. Der Test unseres Prototyps mittels 70.000 Browsersitzungen zeigt, dass unser Browser den Nutzer vor sogenanntem Browser-Fingerprinting Tracking schützt. Im Vergleich zu fünf anderen Browser-Fingerprint-Tools erzielte unser Prototyp die besten Ergebnisse und ist der erste Schutzmechanismus gegen Flash sowie Canvas Fingerprinting. 2021 137 Seiten urn:nbn:de:bvb:739-opus4-8717 Fakultät für Informatik und Mathematik OPUS4-859 Dissertation Niedermeier, Michael Towards High Performability in Advanced Metering Infrastructures The current movement towards a smart grid serves as a solution to present power grid challenges by introducing numerous monitoring and communication technologies. A dependable, yet timely exchange of data is on the one hand an existential prerequisite to enable Advanced Metering Infrastructure (AMI) services, yet on the other a challenging endeavor, because the increasing complexity of the grid fostered by the combination of Information and Communications Technology (ICT) and utility networks inherently leads to dependability challenges. To be able to counter this dependability degradation, current approaches based on high-reliability hardware or physical redundancy are no longer feasible, as they lead to increased hardware costs or maintenance, if not both. The flexibility of these approaches regarding vendor and regulatory interoperability is also limited. However, a suitable solution to the AMI dependability challenges is also required to maintain certain regulatory-set performance and Quality of Service (QoS) levels. While a part of the challenge is the introduction of ICT into the power grid, it also serves as part of the solution. In this thesis a Network Functions Virtualization (NFV) based approach is proposed, which employs virtualized ICT components serving as a replacement for physical devices. By using virtualization techniques, it is possible to enhance the performability in contrast to hardware based solutions through the usage of virtual replacements of processes that would otherwise require dedicated hardware. This approach offers higher flexibility compared to hardware redundancy, as a broad variety of virtual components can be spawned, adapted and replaced in a short time. Also, as no additional hardware is necessary, the incurred costs decrease significantly. In addition to that, most of the virtualized components are deployed on Commercial-Off-The-Shelf (COTS) hardware solutions, further increasing the monetary benefit. The approach is developed by first reviewing currently suggested solutions for AMIs and related services. Using this information, virtualization technologies are investigated for their performance influences, before a virtualized service infrastructure is devised, which replaces selected components by virtualized counterparts. 
Next, a novel model, which allows the separation of services and hosting substrates is developed, allowing the introduction of virtualization technologies to abstract from the underlying architecture. Third, the performability as well as monetary savings are investigated by evaluating the developed approach in several scenarios using analytical and simulative model analysis as well as proof-of-concept approaches. Last, the practical applicability and possible regulatory challenges of the approach are identified and discussed. Results confirm that—under certain assumptions—the developed virtualized AMI is superior to the currently suggested architecture. The availability of services can be severely increased and network delays can be minimized through centralized hosting. The availability can be increased from 96.82% to 98.66% in the given scenarios, while decreasing the costs by over 60% in comparison to the currently suggested AMI architecture. Lastly, the performability analysis of a virtualized service prototype employing performance analysis and a Musa-Okumoto approach reveals that the AMI requirements are fulfilled. 2020 xvii, 198 Seiten urn:nbn:de:bvb:739-opus4-8597 Fakultät für Informatik und Mathematik OPUS4-123 Dissertation Houyou, Amine Mohamed Context-Aware Mobility: A Distributed Approach to Context Management The recent development of a whole plethora of new wireless technologies, such as IEEE 802.11, IEEE 802.15, IEEE 802.16, UMTS, and more recently LTE, etc, has triggered several efforts to integrate these technologies in a converged world of transparent and ubiquitous wireless connectivity. Most of these technologies have evolved around a certain use case and with some user behaviour being assumed; however, there still lacks a holistic solution to adapt access to user needs, in an automatic and transparent manner. One major problem that has to be addressed first, is mobility management between heterogeneous wireless networks. Current mobility management solutions mostly originate from cellular networking systems, which are operator specific, centralised, and focused on a single link technology. In order to deal with the wireless diversity of future wireless and mobile Internet, a new approach is needed. Adaptive wireless connectivity that is tailored around the user needs and capabilities is named context-aware mobility management. Context refers to the information describing the surroundings of the user as well as his/her behaviour, and additional semantic information that could optimise the adaption process. Context management normally entails discovering and tracking context, reasoning based on the discovered information, then adapting (or acting) upon the context-aware application or system. This context management chain is adapted throughout the thesis to the task of context-aware mobility management. The added complexity is necessary to adapt the ubiquitous access to the condition of both the user and the surrounding networks, while assuming that overlapping wireless networks could still be managed in separate management domains. Linking these management domains and aggregating this composite information in the form of a network context is one of the major contributions of this work. An overlay-based solution takes into account this scattered nature of the context management system, which is modelled as a decentralised dynamic location-based service. 
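The performability analysis in the Advanced Metering Infrastructure record above employs a Musa-Okumoto approach; a small sketch of that classical logarithmic Poisson reliability model, with purely hypothetical parameter values:

```python
import math

def musa_okumoto_expected_failures(t: float, lambda0: float, theta: float) -> float:
    """Expected cumulative number of failures after execution time t
    in the Musa-Okumoto logarithmic Poisson model."""
    return (1.0 / theta) * math.log(lambda0 * theta * t + 1.0)

def musa_okumoto_intensity(t: float, lambda0: float, theta: float) -> float:
    """Current failure intensity after execution time t."""
    return lambda0 / (lambda0 * theta * t + 1.0)

# Hypothetical parameters: initial intensity 0.05 failures/hour, decay 0.02
for t in (10, 100, 1000):
    mu = musa_okumoto_expected_failures(t, lambda0=0.05, theta=0.02)
    lam = musa_okumoto_intensity(t, lambda0=0.05, theta=0.02)
    print(f"t={t:5d} h  expected failures={mu:.2f}  intensity={lam:.4f}/h")
```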
The proposed architecture is generalised to support ubiquitous location-based services, and a design methodology is proposed to ensure the localised impact of mobility-led context retrieval overhead. 2009 urn:nbn:de:bvb:739-opus-17975 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-120 Dissertation Ogris, Georg Multi-modal on-body sensing of human activities Increased usage and integration of state-of-the-art information technology in our everyday work life aims at increasing the working efficiency. Due to unhandy human-computer-interaction methods this progress does not always result in increased efficiency, for mobile workers in particular. Activity recognition based contextual computing attempts to balance this interaction deficiency. This work investigates wearable, on-body sensing techniques on their applicability in the field of human activity recognition. More precisely we are interested in the spotting and recognition of so-called manipulative hand gestures. In particular the thesis focuses on the question whether the widely used motion sensing based approach can be enhanced through additional information sources. The set of gestures a person usually performs on a specific place is limited -- in the contemplated production and maintenance scenarios in particular. As a consequence this thesis investigates whether the knowledge about the user's hand location provides essential hints for the activity recognition process. In addition, manipulative hand gestures -- due to their object manipulating character -- typically start in the moment the user's hand reaches a specific place, e.g. a specific part of a machinery. And the gestures most likely stop in the moment the hand leaves the position again. Hence this thesis investigates whether hand location can help solving the spotting problem. Moreover, as user-independence is still a major challenge in activity recognition, this thesis investigates location context as a possible key component in a user-independent recognition system. We test a Kalman filter based method to blend absolute position readings with orientation readings based on inertial measurements. A filter structure is suggested which allows up-sampling of slow absolute position readings, and thus introduces higher dynamics to the position estimations. In such a way the position measurement series is made aware of wrist motions in addition to the wrist position. We suggest location based gesture spotting and recognition approaches. Various methods to model the location classes used in the spotting and recognition stages as well as different location distance measures are suggested and evaluated. In addition a rather novel sensing approach in the field of human activity recognition is studied. This aims at compensating drawbacks of the mere motion sensing based approach. To this end we develop a wearable hardware architecture for lower arm muscular activity measurements. The sensing hardware based on force sensing resistors is designed to have a high dynamic range. In contrast to preliminary attempts the proposed new design makes hardware calibration unnecessary. Finally we suggest a modular and multi-modal recognition system; modular with respect to sensors, algorithms, and gesture classes. This means that adding or removing a sensor modality or an additional algorithm has little impact on the rest of the recognition system. 
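A minimal 1-D sketch of the kind of Kalman filtering described in the on-body sensing record above, blending sparse absolute position fixes with a faster inertial prediction step; the constant-velocity model, rates, and noise values are assumptions for illustration only:

```python
import numpy as np

# Constant-velocity state [position, velocity]; inertial data drives the prediction,
# sparse absolute position fixes provide the measurement update.
dt = 0.01                                   # 100 Hz prediction rate (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
Q = np.diag([1e-4, 1e-2])                   # process noise (assumed)
H = np.array([[1.0, 0.0]])                  # only position is measured
R = np.array([[0.05]])                      # position fix noise (assumed)

x = np.array([[0.0], [0.0]])                # initial state
P = np.eye(2)

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z):
    global x, P
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# 100 Hz prediction, 1 Hz absolute position fixes (up-sampling the slow sensor)
for step in range(300):
    predict()
    if step % 100 == 0:
        update(np.array([[0.5 * step * dt]]))   # synthetic position fix
print(x.ravel())  # fused position and velocity estimate
```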
Sensors and algorithms used for spotting and recognition can be selected and fine-tuned separately for each single activity. New activities can be added without impact on the recognition rates of the other activities. 2009 urn:nbn:de:bvb:739-opus-17930 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-181 Konferenzveröffentlichung Proceedings of the 3rd International Workshop on Polyhedral Compilation Techniques IMPACT 2013 in Berlin, Germany (in conjunction with HiPEAC 2013) is the third workshop in a series of international workshops on polyhedral compilation techniques. The previous workshops were held in Chamonix, France (2011) in conjunction with CGO 2011 and Paris, France (2012) in conjunction with HiPEAC 2012. 2013 urn:nbn:de:bvb:739-opus-26930 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-179 Dissertation Matzeder, Marco Zeichnen von Bäumen auf Gittern Das Zeichnen von Graphen beschäftigt sich mit der Frage, wie die durch einen Graphen repräsentierten Informationen für einen Betrachter übersichtlich und verständlich dargestellt werden können. Die Graphklasse der Bäume dient insbesondere zur Repräsentation von hierarchischen Strukturen. Neben den hierarchisch und radial darstellenden Verfahren werden Bäume auch auf dem orthogonalen Gitter gezeichnet, in welchem die Knoten auf ganzzahligen Koordinaten liegen und die Kanten entlang der horizontalen und vertikalen Gitterlinien verlaufen. Gewünscht wird eine gute Lesbarkeit der Zeichnungen und deren effiziente Berechnung. Für die formale Bewertung der Lesbarkeit existieren speziell für das Zeichnen von Bäumen definierte Ästhetikkriterien, wie eine ebenenweise Darstellung, die Ordnungserhaltung und Kriterien zur Darstellung von Subgraphisomorphien und Symmetrien. Die vorliegende Arbeit befasst sich mit einer bislang wenig studierten Erweiterung des orthogonalen Gitters auf das hexagonale und oktagonale Gitter durch das Hinzunehmen einer bzw. beider diagonaler Gitterrichtungen und mit der Problemstellung, wie Bäume darauf gezeichnet werden. Dadurch können auch Bäume mit einem höheren Grad gezeichnet werden als auf dem orthogonalen Gitter. Die Einschränkung, dass nur Bäume gezeichnet werden können, deren Grad kleiner ist als die Anzahl der Gitterrichtungen des verwendeten Gitters, besteht jedoch weiterhin. Als Ästhetikkriterien werden die lokale Uniformität, die die Länge der ausgehenden Kanten eines Knotens festlegt, und Pattern, die deren Richtungen festlegen, eingeführt. Gegenüber dem bekannten linearen Flächenverbrauch von geradlinigen Zeichnungen von vollständigen Binärbäumen auf dem orthogonalen Gitter werden für Zeichnungen von vollständigen d-nären Bäumen mit d > 2 nicht-lineare untere Schranken für die benötigte Fläche auf dem hexagonalen und dem oktagonalen Gitter gezeigt. Insgesamt werden für vollständige und beliebige, geordnete und ungeordnete Bäume obere und untere Flächenschranken für Zeichnungen auf dem hexagonalen und oktagonalen Gitter präsentiert. Dabei zeigt sich, dass bei nicht-ordnungserhaltenden Zeichnungen zwar mehr als lineare, aber deutlich weniger als quadratische Fläche benötigt wird. Im Gegensatz dazu gibt es geordnete Bäume, deren ordnungserhaltende Zeichnungen exponentielle Fläche benötigen. Des Weiteren wird die Ermittlung der minimalen Zeichenfläche für geordnete d-näre Bäume ebenso als NP-vollständig bewiesen wie das Zeichnen von ungeordneten d-nären Bäumen mit einheitlichen Kantenlängen. 
Schließlich werden zwei Linearzeitalgorithmen vorgestellt, die geordnete d-näre Bäume unter Einhaltung der genannten Ästhetikkriterien zeichnen. 2012 urn:nbn:de:bvb:739-opus-26923 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-190 Dissertation Radde, Sven A Layered Conversational Recommender System In this thesis a new approach to building product recommender systems is introduced. By using a customer-centric dialogue, the customers' preferences are elicited. These are the basis for inferring utility estimations about the desired technical properties of the products in question. Systems built this way can both operate autonomously, e.g., in an online store, and support a salesperson directly at the point-of-sale. The core of the approach is formed by a layered domain description that models customer stereotypes and needs, product attributes, the products themselves, and the causal interrelations between customer and product properties. Maintenance of the domain description, i.e., keeping the model up-to-date in face of frequent changes, is facilitated by the clear separation of concerns provided by the layered structure. In fact, the most frequently used class of updates can be handled in an entirely automated way if some constraints are satisfied. On a high level of abstraction, the system behavior is described by State Charts that are parameterized according to the domain description. Those parts of the system description where State Charts would be too imprecise are implemented by separate components realizing the required complex semantics. From the domain description, a Bayesian network is generated that forms the core of the inference engine of the recommender system. The network essentially controls the system-initiated dialogue flow and the recommendation process. Due to the characteristics of Bayesian networks, it is possible to respond to user-initiated dialogue steps in a natural way. Moreover, an explanation of the current recommendation can be generated without having to explicitly encode additional information in the modeling layer. Finally, a database structure and the SQL queries necessary to obtain recommendations can be inferred from the corresponding parts of the domain description. Instantiation of the system to a specific business domain is supported by a dedicated maintenance application that hides the complexities of the underlying algorithms. Thus, day-to-day system updates by non-technical domain experts, e.g., product managers, are facilitated. The developed concepts were implemented in cooperation with a local industry partner who intends to apply the recommender system in the field of mobile communications. 2013 urn:nbn:de:bvb:739-opus-27031 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-111 Dissertation Oberender, Jens O. Widerstandsfähige Anonymisierungsnetze Unverkettbare Nachrichten sind ein Grundbaustein anonymer Kommunikation. Anonymisierungsnetze schützen mittels Unverkettbarkeit, wer mit wem kommuniziert sowie die Identität der Beteiligten einer Kommunikationsbeziehung. Anonymisierungsnetze benötigen Kooperation, da die Anonymität durch Ressourcen anderer Teilnehmer geschützt wird. Wenn die Kosten und der Nutzen eines Anonymisierungsnetzes transparent sind, ergeben sich Zielkonflikte zwischen rationalen Teilnehmern. Es wird daher untersucht, inwiefern daraus resultierendes egoistisches Verhalten die Widerstandsfähigkeit dieser Netze beeinträchtigt. 
Störungen werden in einem spieltheoretischen Modell untersucht, um widerstandsfähige Konfigurationen von Anonymisierungsnetzen zu ermitteln. Eine weitere Störquelle sind Überflutungsangriffe mittels unverkettbarer Nachrichten. Es sollen sowohl die Verfügbarkeit als auch die Anonymität geschützt werden. Dazu wird Unverkettbarkeit für Nachrichten aufrechterhalten, außer wenn die Senderate eines Nachrichtenstroms eine Richtlinie überschreitet. Innerhalb verkettbarer Nachrichten können Überflutungsangriffe erkannt werden. Dadurch kann die Verfügbarkeit des Netzdienstes geschützt werden. 2009 urn:nbn:de:bvb:739-opus-16846 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-147 Dissertation Hölbling, Günther Personalized Means of Interacting with Multimedia Content Today the world of multimedia is almost completely device- and content-centered. It focuses its energy nearly exclusively on technical issues such as computing power, network specifics or content and device characteristics and capabilities. In most multimedia systems, the presentation of multimedia content and the basic controls for playback are the main issues. Because of this, a very passive user experience, comparable to that of traditional TV, is most often provided. In the face of recent developments and changes in the realm of multimedia and mass media, this "traditional" focus seems outdated. The increasing use of multimedia content on mobile devices, along with the continuous growth in the amount and variety of content available, makes an urgent re-orientation of this domain necessary. In order to highlight the depth of the increasingly difficult situation faced by users of such systems, it is only logical that these individuals be brought to the center of attention. In this thesis we consider these trends and developments by applying concepts and mechanisms to multimedia systems that were first introduced in the domain of user-centrism. Central to the concept of user-centrism is that devices should provide users with an easy way to access services and applications. Thus, the current challenge is to combine mobility, additional services and easy access in a single and user-centric approach. This thesis presents a framework for introducing and supporting several of the key concepts of user-centrism in multimedia systems. Additionally, a new definition of a user-centric multimedia framework has been developed and implemented. To satisfy the user's need for mobility and flexibility, our framework makes seamless media and service consumption possible. The main aim of session mobility is to help people cope with the increasing number of different devices in use. Using a mobile agent system, multimedia sessions can be transferred between different devices in a context-sensitive way. The use of the international standard MPEG-21 guarantees extensibility and the integration of content adaptation mechanisms. Furthermore, a concept is presented that allows for individualized and personalized selection and addresses the need for finding appropriate content. All of this can be done, using this approach, in an easy and intuitive way. Especially in the realm of television, the demand that such systems cater to the needs of the audience is constantly growing. Our approach combines content-filtering methods, state-of-the-art classification techniques and mechanisms well known from the area of information retrieval and text mining. These are all utilized for the generation of recommendations in a promising new way. 
Additionally, concepts from the area of collaborative tagging systems are used. An extensive experimental evaluation yielded several interesting findings and proves the applicability of our approach. In contrast to the "lean-back" experience of traditional media consumption, interactive media services offer a solution to enable the active participation of the audience. Thus, we present a concept which enables the use of interactive media services on mobile devices in a personalized way. Finally, a use case for enriching TV with additional content and services demonstrates the feasibility of this concept. 2011 urn:nbn:de:bvb:739-opus-24210 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-144 Dissertation Johns, Martin Code Injection Vulnerabilities in Web Applications - Exemplified at Cross-site Scripting The majority of all security problems in today's Web applications is caused by string-based code injection, with Cross-site Scripting (XSS) being the dominant representative of this vulnerability class. This thesis discusses XSS and suggests defense mechanisms. We do so in three stages: First, we conduct a thorough analysis of JavaScript's capabilities and explain how these capabilities are utilized in XSS attacks. We subsequently design a systematic, hierarchical classification of XSS payloads. In addition, we present a comprehensive survey of publicly documented XSS payloads which is structured according to our proposed classification scheme. Secondly, we explore defensive mechanisms which dynamically prevent the execution of some payload types without eliminating the actual vulnerability. More specifically, we discuss the design and implementation of countermeasures against the XSS payloads "Session Hijacking", "Cross-site Request Forgery", and attacks that target intranet resources. We build upon this and introduce a general methodology for developing such countermeasures: We determine a necessary set of basic capabilities an adversary needs for successfully executing an attack through an analysis of the targeted payload type. The resulting countermeasure relies on revoking one of these capabilities, which in turn renders the payload infeasible. Finally, we present two language-based approaches that prevent XSS and related vulnerabilities: We identify the implicit mixing of data and code during string-based syntax assembly as the root cause of string-based code injection attacks. Consequently, we explore data/code separation in web applications. For this purpose, we propose a novel methodology for token-level data/code partitioning of a computer language's syntactical elements. This forms the basis for our two distinct techniques: For one, we present an approach to detect data/code confusion at run-time and demonstrate how this can be used for attack prevention. Furthermore, we show how vulnerabilities can be avoided through altering the underlying programming language. We introduce a dedicated datatype for syntax assembly instead of using string datatypes themselves for this purpose. We develop a formal, type-theoretical model of the proposed datatype and prove that it provides reliable separation between data and code, hence preventing code injection vulnerabilities. We verify our approach's applicability utilizing a practical implementation for the J2EE application server. 
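The data/code separation idea from the code injection record above can be illustrated with a small, purely hypothetical sketch (not Johns' actual datatype or the J2EE implementation): markup is assembled through a dedicated type, and externally supplied values are only ever attached as data, so they are escaped when the document is rendered.

```python
import html

class Markup:
    """Toy 'syntax assembly' type: code (markup) and data are kept apart
    until rendering, when all data is escaped. Illustrative only."""
    def __init__(self):
        self._parts = []          # list of ("code" | "data", str)

    def code(self, fragment: str) -> "Markup":
        self._parts.append(("code", fragment))
        return self

    def data(self, value: str) -> "Markup":
        self._parts.append(("data", value))
        return self

    def render(self) -> str:
        return "".join(f if kind == "code" else html.escape(f)
                       for kind, f in self._parts)

user_input = '<script>alert("xss")</script>'
page = Markup().code("<p>Hello, ").data(user_input).code("!</p>")
print(page.render())   # the payload is rendered as inert text, not executed
```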
2009 urn:nbn:de:bvb:739-opus-23626 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-138 Dissertation Guppenberger, Michael Enhancing Information Systems with Event-Handling - A Non-Invasive Approach Due to the immense advance of widely accessible information systems in industrial applications, science, education and every day use, it becomes more and more difficult for users of those information systems to keep track with new and updated information. An approach to cope with this problem is to go beyond traditional search facilities and instead use the users' profiles to monitor data changes and to actively inform them about these updates - an aspect that has to be explicitly developed and integrated into a variety of information systems. This is traditionally done in an individual way, depending on the application and its platform. In this dissertation, we present a novel approach to model the semantic interrelations that specify which users to inform about which updates, based on the underlying model of the respective information system. For the first time, a meta-model that allows information system designers to tag an arbitrary data model and thus specify the event-handling semantics is presented. A formal specification of how to interpret meta-models to determine the receivers of the events completes the presented concept. For the practical realization of this new concept, model driven architecture (MDA) shows to be an ideal technical means. Using our newly developed UML profile based on data-modelling standards, an implementation of the event-handling specification can automatically be generated for a variety of different target platforms, like e.g. relational databases, using triggers. This meta-approach makes the proposed solution ideal with respect to maintainability and genericity. Our solution significantly reduces the overall development efforts for an event-handling facility. In addition, the enhanced model of the information system can be used to generate an implementation that also fulfils non-functional requirements like high performance and extensibility. The overall framework, consisting of the domain specific language (i.e. the meta-model), formal and technical transformations of how to interpret the enhanced information system model and a cost-based optimizing strategy, constitutes an integrated approach, offering several advantages over traditional implementation techniques: our framework can be applied to new information systems as well as to legacy applications without having to modify existing systems; it offers an extensible, easy-to-use, generic and thus re-usable solution and it can be tailored to and optimized for many use cases, as the practical evaluation presented in this dissertation verifies. 2010 urn:nbn:de:bvb:739-opus-22485 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-150 Dissertation Rabl, Tilmann Efficiency in Cluster Database Systems - Dynamic and Workload-Aware Scaling and Allocation Database systems have been vital in all forms of data processing for a long time. In recent years, the amount of processed data has been growing dramatically, even in small projects. Nevertheless, database management systems tend to be static in terms of size and performance which makes scaling a difficult and expensive task. Because of performance and especially cost advantages more and more installed systems have a shared nothing cluster architecture. 
Due to the massive parallelism of the hardware, programming paradigms from high performance computing are translated into data processing. Database research struggles to keep up with this trend. A key feature of traditional database systems is to provide transparent access to the stored data. This introduces data dependencies and increases system complexity and inter-process communication. Therefore, many developers are exchanging this feature for better scalability. However, explicitly managing the data distribution and data flow requires a deep understanding of the distributed system and reduces the possibilities for automatic and autonomic optimization. In this thesis we present an approach for database system scaling and allocation that features good scalability although it keeps the data distribution transparent. The first part of this thesis analyzes the challenges and opportunities for self-scaling database management systems in cluster environments. Scalability is a major concern of Internet-based applications. Access peaks that overload the application are a financial risk. Therefore, systems are usually configured to be able to process peaks at any given moment. As a result, server systems often have a very low utilization. In distributed systems the efficiency can be increased by adapting the number of nodes to the current workload. We propose a processing model and an architecture that allows efficient self-scaling of cluster database systems. In the second part we consider different allocation approaches. To increase the efficiency we present a workload-aware, query-centric model. The approach is formalized; optimal and heuristic algorithms are presented. The algorithms optimize the data distribution for local query execution and balance the workload according to the query history. We present different query classification schemes for different forms of partitioning. The approach is evaluated for OLTP- and OLAP-style workloads. It is shown that variants of the approach scale well for both fields of application. The third part of the thesis considers benchmarks for large, adaptive systems. First, we present a data generator for cloud-sized applications. Due to its architecture the data generator can easily be extended and configured. A key feature is the high degree of parallelism that makes linear speedup for arbitrary numbers of nodes possible. To simulate systems with user interaction, we have analyzed a productive online e-learning management system. Based on our findings, we present a model for workload generation that considers the temporal dependency of user interaction. 2011 urn:nbn:de:bvb:739-opus-25821 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-141 Dissertation Berl, Andreas Energy Efficiency in Office Computing Environments The increasing cost of energy and the worldwide desire to reduce CO2 emissions have raised concerns about the energy efficiency of information and communication technology. Whilst research has focused on data centres recently, this thesis identifies office computing environments as significant consumers of energy. Office computing environments offer great potential for energy savings: On the one hand, such environments consist of a large number of hosts. On the other hand, these hosts often remain turned on 24 hours per day while being underutilised or even idle. This thesis analyzes the energy consumption within office computing environments and suggests an energy-efficient virtualized office environment. 
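A back-of-the-envelope sketch of the kind of saving the office-computing record above targets, comparing always-on desktops with off-hours consolidation onto a few hosts; all power figures and host counts are invented for illustration and do not reproduce the thesis' results:

```python
# Always-on office versus consolidation of idle desktops onto a few hosts.
HOSTS = 100
IDLE_WATTS = 70.0        # assumed idle power per desktop
ACTIVE_WATTS = 120.0     # assumed power under load
HOURS_PER_DAY = 24
WORK_HOURS = 8

always_on_kwh = HOSTS * (WORK_HOURS * ACTIVE_WATTS +
                         (HOURS_PER_DAY - WORK_HOURS) * IDLE_WATTS) / 1000.0

# Consolidated: outside working hours, the services of all hosts are packed
# onto 10 hosts that stay on; the remaining hosts are switched off.
CONSOLIDATION_HOSTS = 10
consolidated_kwh = (HOSTS * WORK_HOURS * ACTIVE_WATTS +
                    CONSOLIDATION_HOSTS * (HOURS_PER_DAY - WORK_HOURS) * ACTIVE_WATTS) / 1000.0

print(f"always on:    {always_on_kwh:.1f} kWh/day")
print(f"consolidated: {consolidated_kwh:.1f} kWh/day")
print(f"savings:      {100 * (1 - consolidated_kwh / always_on_kwh):.0f} %")
```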
The office environment is virtualized to achieve flexible virtualized office resources that enable an energy-based resource management. This resource management stops idle services and idle hosts from consuming resources within the office and consolidates utilised office services on office hosts. This increases the utilisation of some hosts while other hosts are turned off to save energy. The suggested architecture is based on a decentralized approach that can be applied to all kinds of office computing environments, even if no centralized data centre infrastructure is available. The thesis develops the architecture of the virtualized office environment together with an energy consumption model that is able to estimate the energy consumption of hosts and network within office environments. The model enables the energy-related comparison of ordinary and virtualized office environments, considering the energy-efficient management of services. Furthermore, this thesis evaluates energy efficiency and overhead of the suggested approach. First, it theoretically proves the energy efficiency of the virtualized office environment with respect to the energy consumption model. Second, it uses Markov processes to evaluate the impact of user behaviour on the suggested architecture. Finally, the thesis develops a discrete-event simulation that enables the simulation and evaluation of office computing environments with respect to varying virtualization approaches, resource management parameters, user behaviour, and office equipment. The evaluation shows that the virtualized office environment saves more than half of the energy consumption within office computing environments, depending on user behaviour and office equipment. 2011 urn:nbn:de:bvb:739-opus-22516 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-305 Dissertation Ehlers, Christoph Top-k Semantic Caching The subject of this thesis is the intelligent caching of top-k queries in an environment with high latency and low throughput. In such an environment, caching can be used to reduce network traffic and improve response time. Slow database connections of mobile devices and connections to offshored databases are practical use cases. A semantic cache is a query-based cache that caches query results and maintains their semantic description. It reuses partial matches of previous query results. Each query that is processed by the semantic cache is split into two disjoint parts: one that can be completely answered with tuples of the cache (probe query), and another that requires tuples to be transferred from the server (remainder query). Existing semantic caches do not support top-k queries, i.e., ordered and limited queries. In this thesis, we present an innovative semantic cache that naturally supports top-k queries. The support of top-k queries in a semantic cache has considerable effects on cache elements, operations on cache elements -- like creation, difference, intersection, and union -- and query answering. Hence, we introduce new techniques for cache management and query processing. They enable the semantic cache to become a true top-k semantic cache. In addition, we have developed a new algorithm that can estimate the lower bounds of query results of sorted queries using multidimensional histograms. Using this algorithm, our top-k semantic cache is able to pipeline partial query results of top-k queries. Thereby, query execution performance can be significantly increased. 
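A toy sketch of the probe/remainder split described in the semantic caching record above, reduced to a single numeric range predicate; real semantic caches operate on general query predicates, so this is purely illustrative:

```python
from dataclasses import dataclass
from typing import Optional, Tuple, List

Interval = Tuple[float, float]   # half-open [lo, hi) range predicate on one attribute

@dataclass
class SplitResult:
    probe: Optional[Interval]    # part answerable from the cached result
    remainder: List[Interval]    # parts that must be fetched from the server

def split(query: Interval, cached: Interval) -> SplitResult:
    q_lo, q_hi = query
    c_lo, c_hi = cached
    lo, hi = max(q_lo, c_lo), min(q_hi, c_hi)
    probe = (lo, hi) if lo < hi else None
    remainder: List[Interval] = []
    if probe is None:
        remainder.append(query)
    else:
        if q_lo < lo:
            remainder.append((q_lo, lo))
        if hi < q_hi:
            remainder.append((hi, q_hi))
    return SplitResult(probe, remainder)

# Cached: price in [10, 50); new query: price in [30, 80)
print(split((30, 80), (10, 50)))   # probe=(30, 50), remainder=[(50, 80)]
```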
We have implemented a prototype of a top-k semantic cache called IQCache (Intelligent Query Cache). An extensive and thorough evaluation with various benchmarks using our prototype demonstrates the applicability and performance of top-k semantic caching in practice. The experiments prove that the top-k semantic cache invariably outperforms simple hash-based caching strategies and scales very well. 2015 266 urn:nbn:de:bvb:739-opus4-3055 Fakultät für Informatik und Mathematik OPUS4-293 Dissertation Limam, Lyes Usage-Driven Unified Model for User Profile and Data Source Profile Extraction This thesis addresses a problem related to usage analysis in information retrieval systems. Indeed, we exploit the history of search queries as a basis of analysis to extract a profile model. The objective is to characterize the user and the data source that interact in a system to allow different types of comparison (user-to-user, source-to-source, user-to-source). According to the study we conducted on existing work on profile models, we concluded that the large majority of the contributions are strongly tied to the applications within which they are proposed. As a result, the proposed profile models are not reusable and suffer from several weaknesses. For instance, these models do not consider the data source, they lack semantic mechanisms and they do not deal with scalability (in terms of complexity). Therefore, we propose a generic model of user and data source profiles. The characteristics of this model are the following. First, it is generic, being able to represent both the user and the data source. Second, it enables the implicit construction of profiles based on histories of search queries. Third, it defines the profile as a set of topics of interest, each topic corresponding to a semantic cluster of keywords extracted by a specific clustering algorithm. Finally, the profile is represented according to the vector space model. The model is composed of several components organized in the form of a framework, in which we assessed the complexity of each component. The main components of the framework are: • a method for keyword query disambiguation; • a method for semantically representing search query logs in the form of a taxonomy; • a clustering algorithm that allows fast and efficient identification of topics of interest as semantic clusters of keywords; • a method to identify user and data source profiles according to the generic model. This framework enables, in particular, various tasks related to the usage-based structuring of a distributed environment. As an example of application, the framework is used for the discovery of user communities and the categorization of data sources. To validate the proposed framework, we conduct a series of experiments on real logs from the search engine AOL search, which demonstrate the efficiency of the disambiguation method on short queries and show the relation between quality-based clustering and structure-based clustering. 2014 160 urn:nbn:de:bvb:739-opus4-2936 Fakultät für Informatik und Mathematik OPUS4-509 Dissertation Wendler, Philipp Towards Practical Predicate Analysis Software model checking is a successful technique for automated program verification. Several of the most widely used approaches for software model checking are based on solving first-order-logic formulas over predicates using SMT solvers, e.g., predicate abstraction, bounded model checking, k-induction, and lazy abstraction with interpolants. 
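Since the profiles in the usage-driven model above are represented in the vector space model, the user-to-user, source-to-source, and user-to-source comparisons reduce to a vector similarity; a minimal sketch using cosine similarity (the concrete measure, topic names, and weights are assumptions for illustration):

```python
import math

def cosine_similarity(p: dict, q: dict) -> float:
    """Cosine similarity between two profiles given as topic -> weight maps."""
    common = set(p) & set(q)
    dot = sum(p[t] * q[t] for t in common)
    norm_p = math.sqrt(sum(w * w for w in p.values()))
    norm_q = math.sqrt(sum(w * w for w in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

user_profile   = {"databases": 0.7, "caching": 0.5, "sports": 0.1}
source_profile = {"databases": 0.6, "caching": 0.8}
print(cosine_similarity(user_profile, source_profile))
```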
We define a configurable framework for predicate-based analyses that allows expressing each of these approaches. This unifying framework highlights the differences between the approaches, producing new insights, and facilitates research of further algorithms and their combinations, as witnessed by several research projects that have been conducted on top of this framework. In addition to this theoretical contribution, we provide a mature implementation of our framework in a software verifier that allows applying all of the mentioned approaches in practice. This implementation is used by other research groups, e.g., to find bugs in the Linux kernel, and has proven its competitiveness by winning gold medals in the International Competition on Software Verification. Tools and approaches for software model checking like our predicate analysis are typically evaluated using performance benchmarking on large sets of verification tasks. We have identified several pitfalls that can silently arise during benchmarking, and we have found that the benchmarking techniques and tools that are used by many researchers do not guarantee valid results in practice, but may produce arbitrarily large measurement errors. Furthermore, certain hardware characteristics can also have a nondeterministic influence on the measurements. In order to be able to properly evaluate our framework for software verification, we study the effects of these hardware characteristics and define a list of the most important requirements that need to be ensured for reliable benchmarking. As a solution, we present the open-source benchmarking framework BenchExec, which, in contrast to other benchmarking tools, fulfills all our requirements and aims at making reliable benchmarking easy. BenchExec has already been adopted by several research groups and the International Competition on Software Verification. Using the power of BenchExec, we conduct an experimental evaluation of our unifying framework for predicate analysis. We study the effect of varying the SMT solver and the way program semantics are encoded in formulas across several verification algorithms and find that these technical choices can significantly influence the results of experimental studies of verification approaches. This is valuable information both for researchers who study verification approaches and for users who apply them in practice. Our comprehensive study of 120 different configurations would not have been possible without our highly flexible and configurable unifying framework for predicate analysis and shows that the latter is a valuable base for conducting experiments. Furthermore, using a comparison against top-ranking verifiers from the International Competition on Software Verification, we show that our implementation is highly competitive and can outperform the state of the art. 2017 xii, 212 Seiten urn:nbn:de:bvb:739-opus4-5098 Fakultät für Informatik und Mathematik OPUS4-327 Dissertation Nagler, Johannes Digital Curvature Estimation: An Operator Theoretic Approach This thesis is divided into two parts. The first part is devoted to the curvature estimation of piecewise smooth curves using variation diminishing splines. The variation diminishing property combined with the ability to reconstruct linear functions leads to a convexity preserving approximation that is crucial if additional sign changes in the curvature estimation have to be avoided. 
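The quantity estimated in the digital curvature record above is the curvature of a planar parametric curve, kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2); the following small sketch evaluates that formula with plain finite differences (the thesis instead builds on variation diminishing spline approximations):

```python
import numpy as np

def curvature(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Discrete curvature estimate of a planar curve sampled as (x[i], y[i]),
    using kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5

# Sanity check: a circle of radius 2 has constant curvature 1/2.
t = np.linspace(0, 2 * np.pi, 400)
kappa = curvature(2 * np.cos(t), 2 * np.sin(t))
print(kappa[5:-5].round(3))   # approximately 0.5 away from the boundary samples
```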
To this end, we will first establish the foundations of variation diminishing transforms and introduce the Bernstein and the Schoenberg operator on the space of continuous functions and its generalization to the Lp-spaces. In order to be able to detect C2-singularities in piecewise smooth curves, we establish lower estimates for the approximation error in terms of the second order modulus of smoothness for Schoenberg's variation diminishing operator. Afterwards, we consider smooth curve approximations using only finitely many samples of the curve, where the approximation, its first, and its second derivative converge uniformly to its corresponding part of the curve to be approximated. In this case, we can show that the estimated curvature converges uniformly to the real curvature if the number of samples goes to infinity. Based on the lower estimates that relates the decay rate of the approximation error with smoothness we propose a multi-scale algorithm to estimate the curvature and to detect C2-singularities. We numerically evaluate our algorithm and compare it to others to show that our algorithm achieves competitive accuracy while our curvature estimations are significantly faster to compute. The second part deals with generalizations of the established lower estimates for the Schoenberg operator. We will show that such estimates can be obtained for linear operators on a general Banach function space with smooth range provided that the iterates of the operator converge uniformly and a semi-norm defined on the range of the operator annihilates the fixed points of the operator. To this end, we will prove by spectral properties that the iterates of every positive finite-rank operator converge uniformly. As highlight of this thesis, we show a constructive way using a Gramian matrix where the dual fixed points operate on the fixed points of an operator to derive the limit of the iterates for an arbitrary quasi-compact operator defined on a general Banach space. 2015 164 urn:nbn:de:bvb:739-opus4-3276 Fakultät für Informatik und Mathematik OPUS4-311 misc Groening, Sascha Spotzuordnung und Wellenfrontrekonstruktion für Shack-Hartmann-Sensoren Abstract zum Dissertationsfragment "Spotzuordnung und Wellenfront-Rekonstruktion für Shack-Hartmann-Sensoren" von Sascha Groening (20.04.1972 - 16.11.2001) Sascha Groening war nach seinem Studium der Informatik an der Universität Passau von 01.Oktober 1998 bis 16. November 2001 wissenschaftlicher Mitarbeiter der Forschungsgruppe Entscheidungsunterstützende Systeme innerhalb des Forschungsverbundes Wissensbasierte Systeme, die 2005 in das Institut für Softwaresystem in technischen Anwendungen der Informatik an der Universität Passau überging. Tragischerweise ist er am 16. November 2001 im Alter von nur 28 Jahren kurz vor der Fertigstellung seiner Dissertation völlig überraschend verstorben. In seiner Dissertation beschäftigte er sich mit Fragestellungen aus dem Teilprojekt „Entwicklung eines Messverfahrens mit hohem Dynamikbereich für die Qualitätssicherung von optischen Asphären für die in-situ Messung von Wellenfronten (kurz Wellensensor)" des Forschungsverbundes FORMIKROSYS II, der von der Bayerischen Forschungsstiftung gefördert wurde. Auf den Sicherungen des Instituts konnten wir nur eine relativ unvollständige elektronische Version seiner Dissertation „Spotzuordnung und Wellenfrontrekonstruktion für Shack-Hartmann-Sensoren" finden, da er seine Arbeit auf einem privaten PC anfertigte.    
Monika Groening, seine Mutter, hat dagegen im Jahr 2013 privat den unten stehenden Ausdruck seiner schriftlichen Arbeit gefunden, der einem Stand etwa zwei Wochen vor der geplanten Abgabe entspricht und der nun hiermit als Scan der Öffentlichkeit zur Verfügung gestellt wird. Am Institut wurden nach Abschluss des Projektes Wellensensor die Forschungen auf diesem Gebiet nicht weiter verfolgt, da die Problemstellung des Projektes für die Partner, die die Lösung von Sascha Groening angewendet haben, zufriedenstellend gelöst war. Im Rahmen des Projektes wurden die Leistungsgrenzen für ein Messgerät zur Vermessung von optischen Asphären (z.B. Gleitsicht-Brillengläsern) und allgemeinen Wellenfronten (z.B. Strahlprofil eines Lasers, Wellenfront hinter einem optischen Subsystem) auf der Basis des Shack-Hartmann-Sensors erforscht. Das Prinzip beruht auf der geometrisch-optischen Bestimmung lokaler Wellenfrontkrümmungen mit Hilfe eines Feldes von Mikrolinsen und einer CCD-Kamera in der Fokalebene der Mikrolinsen. Sascha Groening entwickelte völlig neu konzipierte Auswertealgorithmen, die eine schnelle und zuverlässige Zuordnung von Fokuspunkten zu Mikrolinsen durchführen und damit eine hochgenaue Vermessung von Asphären bei stark aberranten Wellenfronten möglich machen. Eine einfallende Wellenfront erzeugt in der Fokalebene der Mikrolinsen ein charakteristisches Spotmuster. Durch die Analyse der lokalen Ablenkungen der Spots von ihren Idealpositionen, also den Positionen, die bei Einfall einer ebenen Wellenfront entstehen würden, können Aussagen über das lokale Steigungsverhalten der einfallenden Wellenfront getroffen werden. Je größer der Dynamikbereich des Messgerätes sein soll, desto schwieriger wird das Problem der Zuordnung von Spot zu Mikrolinse. Genau diese Herausforderung wurde von Sascha Groening durch einen iterativen Spline-Passungs-Algorithmus schnell und elegant gelöst. Ist die Spotzuordnung erfolgt, kann die Wellenfront aus den lokalen Ablenkungen der Spots rekonstruiert werden. Das Fragment der Dissertationsschrift ist vollständig bis auf Kapitel 1 (Einleitung) und Kapitel 7 (Zusammenfassung). Außerdem fehlen in Kapitel 4 noch die Beschreibungen einiger untersuchter Verfahren zur Spotdetektion, und in den Unterkapiteln von 5 und 6 fehlen die Verfahren zur Spotzuordnung und Wellenfrontrekonstruktion bei nicht stetig differenzierbaren bzw. unstetigen Wellenfronten. Ansonsten bietet die Arbeit aber einen guten Überblick über den Stand der Technik zur damaligen Zeit und erklärt die grundlegenden Verfahren, die von Sascha Groening erforscht und entwickelt wurden. In Kapitel 2 werden ausführlich das Funktionsprinzip des Shack-Hartmann-Sensors und die Wellenfrontvermessung erklärt, wobei speziell auf das Zuordnungsproblem von Spot zu Mikrolinse und auf die globale Wellenfrontrekonstruktion aus einem Feld partieller Ableitungen eingegangen wird. Die mathematischen Grundlagen wie Tensorprodukt-Splines oder die Lösungsverfahren für lineare Ausgleichsprobleme ohne und mit linearen Nebenbedingungen werden in Kapitel 3 gelegt. Einen ersten Schwerpunkt bildet Kapitel 4, das sich mit der Spotdetektion beschäftigt. Nach der Erläuterung der optischen Grundlagen wird die Spotentstehung mathematisch modelliert, um so geeignete Detektionsverfahren ableiten zu können. Im zentralen Kapitel 5 der Dissertation geht es um die Spotzuordnung. Nach der Vorstellung bekannter Verfahren wird die neu entwickelte Spotzuordnung durch iterative Funktionspassung beschrieben. 
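For orientation only (this is a generic illustration, not Groening's algorithm): in a Shack-Hartmann sensor, the local wavefront slope behind each microlens follows from the spot displacement relative to its reference position divided by the focal length, slope ~ delta / f. A tiny sketch with hypothetical numbers:

```python
import numpy as np

def local_slopes(spots: np.ndarray, reference: np.ndarray, focal_length: float) -> np.ndarray:
    """Local wavefront slopes (x and y components) per microlens:
    spot displacement from the reference position divided by the focal length."""
    return (spots - reference) / focal_length

# Hypothetical 3-lenslet example: positions in micrometres, focal length 5000 um
reference = np.array([[0.0, 0.0], [150.0, 0.0], [300.0, 0.0]])
spots     = np.array([[2.0, 0.5], [151.0, -1.0], [303.0, 0.0]])
print(local_slopes(spots, reference, focal_length=5000.0))
```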
Nach einer initialen Zuordnung von einigen zentralen Spots, die unter Nutzung von Nebenwissen problemlos möglich ist, wird dann iterativ mit Hilfe von Splinepassungen vorgegangen. Bei einer vorhandenen Zuordnung wird durch Extrapolation der berechneten Splinepassung der Suchbereich für weitere, nicht zugeordnete Spots ausgedehnt, und diese werden dann wiederum korrekt zugeordnet. Diese Schritte werden iterativ wiederholt, bis alle Spots zugeordnet sind. Die Vorgehensweise, von bekanntem in unbekanntes Gebiet zu extrapolieren führt hier vortrefflich zum Ziel. Sascha Groening zeigt auch, dass der Erfolg vom gewählten Funktionsraum, hier Tensorprodukt-Splines, abhängt. Schließlich runden die Verfahren zur Wellenfrontrekonstruktion, die ebenfalls auf Spline-Passungen basieren, in Kapitel 6 die Arbeit ab. Mit der Bereitstellung dieses Fragments seiner Dissertation wollen wir die geleistete Arbeit von Sascha Groening an der Universität Passau würdigen und hoffen, dass seine Ergebnisse ihren Platz in der wissenschaftlichen Gemeinschaft finden. Im Mai 2015, Dr. Erich Fuchs 2001 urn:nbn:de:bvb:739-opus4-3110 Fakultät für Informatik und Mathematik OPUS4-582 Dissertation Pöhls, Henrich C. Increasing the Legal Probative Value of Cryptographically Private Malleable Signatures Die Arbeit befasst sich mit der Erarbeitung von technischen Vorgaben und deren Umsetzung in kryptographisch sichere Verfahren von datenschutzfreundlichen, veränderbaren digitalen Signaturverfahren (private malleable signature schemes oder MSS) zur Erlangung möglichst hoher rechtlicher Evidenz. Im Recht werden bestimmte kryptographische Algorithmen, Schlüssellängen und deren korrekte organisatorische Anwendungen zur Erzeugung elektronisch signierter Dokumente als rechtssicher eingestuft. Dies kann zu einer Beweiserleichterung mithilfe signierter Dokumente führen. So gelten nach Verordnung (EU) Nr. 910/2014 (eIDAS) qualifiziert signierte elektronische Dokumente entweder als Anscheinsbeweis der Echtheit oder ihnen wird gar eine gesetzliche Vermutung der Echtheit zuteil. Gesetzlich anerkannte technische Verfahren, die einen solch erhöhten Beweiswert erreichen, erfüllen mithilfe von Kryptographie im wesentlichen zwei Eigenschaften: Integritätsschutz (integrity), also die Erkennung der Abwesenheit von unerwünschten Änderungen und die Zurechenbarkeit des unveränderten Dokumentes zum Signaturersteller (accountability). Hingegen ist der größte Vorteil veränderbarer digitaler Signaturverfahren (MSS) die „privacy" genannte Eigenschaft: Eine autorisierte Änderung verbirgt den vorherigen Inhalt. Des Weiteren bleibt die Signatur solange valide wie ausschliesslich autorisierte Änderungen vorgenommen werden. Wird diese Eigenschaft kryptographisch nachweislich sicher erfüllt, so spricht man von einem private malleable signature scheme. In der Arbeit werden zwei verbreitete Formen, die sogenannten redactable signature schemes (RSS) und die sanitizable signature schemes (SSS), eingehend betrachtet. Diese erlauben vielfältige Einsatzmöglichkeiten, zum Beispiel eine autorisierte spätere Veränderung zur Wahrung von Geschäftsgeheimnissen oder zum Datenschutz: Der Unterzeichner delegiert so beispielsweise über ein private redactable signature scheme nur das nachträgliche Schwärzen (redaction). Dies schränkt die Veränderbarkeit auf das Entfernen von Informationen ein, erlaubt aber wirksam die Wahrung des Datenschutzes oder den Schutz von (Geschäfts)geheimnissen indem diese Informationen irreversibel für Angreifer entfernt werden. 
Die kryptographische privacy Eigenschaft besagt, dass es nun nicht mehr effizient möglich ist, aus dem geschwärzten Dokument Wissen über die geschwärzten Informationen zu erlangen, auch und gerade nicht für den Signaturprüfer. Die Arbeit geht im Kern der Frage nach, ob ein MSS sowohl die kryptographische Eigenschaft „privacy" als auch gleichzeitig die Eigenschaften „integrity" und „accountability" mit ausreichend hohen Sicherheitsniveaus erfüllen kann. Das Ziel ist es, dass ein MSS gleichzeitig einen ausreichend hohen Grad an Sicherheit erreicht, dass (1) die autorisierten nachträglichen Änderungen zum Schutze von Geschäftsgeheimnissen oder personenbezogenen Daten eingesetzt werden können, und dass (2) dem Dokument, welches mit dem speziellen Signaturverfahren signiert wurde, ein erhöhter Beweiswert beigemessen werden kann. In Bezug auf Letzteres stellt die Arbeit sowohl die technischen Vorgaben, welche für qualifizierte elektronische Signaturen (nach Verordnung (EU) Nr. 910/2014) gelten, in Bezug auf die nachträgliche Änderbarkeit dar, als auch konkrete kryptographische Eigenschaften und Verfahren, um diese Vorgaben kryptographisch beweisbar zu erreichen. Insbesondere weisen veränderbare Signaturen (MSS) einen anderen Integritätsschutz als traditionelle digitale Signaturen auf: Eine signierte Nachricht darf nachträglich durch eine definierte dritte Partei in einer definierten Art modifiziert werden. Diese sogenannte autorisierte Änderung (authorized modification) kann auch ohne Kenntnis des geheimen Signaturschlüssels des Unterzeichners durchgeführt werden. Bei der Verifikation der digitalen Signatur durch den Signaturprüfer bleiben der ursprüngliche Signierende und dessen Einwilligung zur autorisierten Änderung kryptographisch verifizierbar, auch wenn autorisierte Änderungen vorgenommen wurden. Die Arbeit umfasst folgende Bereiche: 1. Analyse der Rechtsvorgaben zur Ermittlung der rechtlich relevanten technischen Anforderungen hinsichtlich des geforderten Integritätsschutzes (integrity protection) und hinsichtlich des Schutzes von personenbezogenen Daten und (Geschäfts)geheimnissen (privacy protection), 2. Definition eines geeigneten Integritäts-Begriffes zur Beschreibung der Schutzfunktion von existierenden malleable signatures und bereits rechtlich anerkannten Signaturverfahren, 3. Harmonisierung und Analyse der kryptographischen Eigenschaften existierender malleable signature Verfahren in Hinblick auf die rechtlichen Anforderungen, 4. Entwicklung neuer und beweisbar sicherer kryptographischer Verfahren, 5. abschließende Bewertung des rechtlichen Beweiswertes (probative value) und des Datenschutzniveaus anhand der technischen Umsetzung der rechtlichen Anforderungen. Die Arbeit kommt zu dem Ergebnis, dass zunächst einmal jedwede (autorisierte wie auch unautorisierte) Änderung von einem kryptographisch sicheren malleable signature Verfahren (MSS) ebenfalls erkannt werden muss, um Konformität mit Verordnung (EU) Nr. 910/2014 (eIDAS) zu erlangen. Eine solche Änderungserkennung, durch die der Signaturprüfer ohne Zuhilfenahme weiterer Parteien oder Geheimnisse die Abwesenheit von autorisierten und unautorisierten Änderungen erkennt, wurde im Rahmen der Arbeit entwickelt (non-interactive public accountability (PUB)). Diese neue kryptographische Eigenschaft wurde veröffentlicht und bereits von Arbeiten Anderer aufgegriffen. 
Des Weiteren werden neue kryptographische Eigenschaften sowie redactable signature und sanitizable signature Verfahren vorgestellt, welche zusätzlich zu dieser Änderungserkennung einen starken Schutz gegen die Aufdeckung des Originals ermöglichen. Werden geeignete Eigenschaften erfüllt, so wird für bestimmte Fälle ein technisches Schutzniveau erzielt, welches mit klassischen Signaturen vergleichbar ist. Damit lässt sich die Kernfrage positiv beantworten: Private MSS können ein Integritätsschutzniveau erreichen, welches dem rechtlich anerkannter digitaler Signaturen technisch entspricht, aber dennoch nachträgliche Änderungen autorisieren kann, welche einen starken Schutz gegen die Wiederherstellung des Originals ermöglichen. 2018 540, LXXVIII Seiten urn:nbn:de:bvb:739-opus4-5823 Fakultät für Informatik und Mathematik OPUS4-222 Dissertation Meixner, Britta Annotated Interactive Non-linear Video - Software Suite, Download and Cache Management Modern Web technology makes the dream of fully interactive and enriched video come true. Nowadays it is possible to organize videos in a non-linear way, playing in a sequence unknown in advance. Furthermore, additional information can be added to the video, ranging from short descriptions to animated images and further videos. This calls for an easy and efficient-to-use authoring tool which is capable of managing the single media objects, as well as of clearly arranging the links between the parts. Tools of this kind are rare and mostly do not provide the full range of needed functions. While providing an interactive experience to the viewer in the Web player, parallel plot sequences and additional information lead to an increased download volume. This may cause pauses during playback while elements that are displayed with the video have to be downloaded. A good quality of experience for these videos, with short waiting times and playback without interruptions, is desired. This work presents the SIVA Suite to create the previously described annotated interactive non-linear videos. We propose a video model for interactivity, non-linearity, and annotations, which is implemented in an XML format, an authoring tool, and a player. Video is the main medium, whereby different scenes are linked to a scene graph. Time-controlled additional content called annotations, like text, images, audio files, or videos, is added to the scenes. The user is able to navigate in the scene graph by selecting a button on a button panel. Furthermore, other navigational elements like a table of contents or a keyword search are provided. Besides the SIVA Suite, this thesis presents algorithms and strategies for download and cache management to provide a good quality of experience while watching the annotated interactive non-linear videos. To this end, we implemented a standard-independent player framework. Integrated into a simulation environment, the framework allows evaluating algorithms and strategies for the calculation of start-up times and the selection of elements to pre-fetch into and delete from the cache. Their interaction during the playback of non-linear video contents can be analyzed. The algorithms and strategies can be used to minimize interruptions in the video flow after user interactions. Our extensive evaluation showed that our techniques result in faster start-up times and fewer interruptions in the video flow than those of other players. 
Knowledge of the structure of an interactive non-linear video can be used to minimize the start-up time at the beginning of a video while minimizing an increase in the overall download volume. 2014 urn:nbn:de:bvb:739-opus-27403 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-219 Dissertation EL-Khoury, Vanessa Semantic Protection and Personalization of Video Content. PIAF: MPEG Compliant Adaptation Framework Preserving the User Perceived Quality UME is the notion that a user should receive informative adapted content anytime and anywhere. Personalization of videos, which adapts their content according to user preferences, is a vital aspect of achieving the UME vision. User preferences can be translated into several types of constraints that must be considered by the adaptation process, including semantic constraints directly related to the content of the video. To deal with these semantic constraints, a fine-grained adaptation, which can go down to the level of video objects, is necessary. The overall goal of this adaptation process is to provide users with adapted content that maximizes their Quality of Experience (QoE). This QoE depends at the same time on the level of the user's satisfaction in perceiving the adapted content, the amount of knowledge assimilated by the user, and the adaptation execution time. In video adaptation frameworks, the Adaptation Decision Taking Engine (ADTE), which can be considered as the "brain" of the adaptation engine, is responsible for achieving this goal. The task of the ADTE is challenging as many adaptation operations can satisfy the same semantic constraint, thus resulting in several feasible adaptation plans. Indeed, for each entity undergoing the adaptation process, the ADTE must decide on the adequate adaptation operator that satisfies the user's preferences while maximizing his/her quality of experience. The first challenge is to objectively measure the quality of the adapted video, taking into consideration the multiple aspects of the QoE. The second challenge is to assess this quality beforehand in order to choose the most appropriate adaptation plan among all possible plans. The third challenge is to resolve conflicting or overlapping semantic constraints, in particular conflicts arising from constraints expressed by the owner's intellectual property rights about the modification of the content. In this thesis, we tackled the aforementioned challenges by proposing a Utility Function (UF), which integrates semantic concerns with the user's perceptual considerations. This UF models the relationships among adaptation operations, user preferences, and the quality of the video content. We integrated this UF into an ADTE. This ADTE performs a multi-level piecewise reasoning to choose the adaptation plan that maximizes the user-perceived quality. Furthermore, we included intellectual property rights in the adaptation process. Thereby, we modeled content owner constraints. We dealt with the problem of conflicting user and owner constraints by mapping it to a known optimization problem. Moreover, we developed the SVCAT, which produces structural and high-level semantic annotation according to an original object-based video content model. We also modeled the user's preferences, proposing extensions to MPEG-7 and MPEG-21. All the developed contributions were carried out as part of a coherent framework called PIAF. 
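A toy sketch of the kind of utility-driven decision the ADTE in the PIAF record above makes: among the feasible adaptation plans, pick the one maximizing a weighted utility over perceptual quality, semantic coverage, and execution time. The linear form, the weights, and the plan scores below are invented for illustration and are not the thesis' actual Utility Function:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    perceptual_quality: float   # 0..1, higher is better
    semantic_coverage: float    # 0..1, fraction of relevant objects preserved
    exec_time_s: float          # adaptation execution time

def utility(p: Plan, w_q=0.5, w_s=0.4, w_t=0.1, t_max=10.0) -> float:
    """Hypothetical linear utility; penalises slow plans relative to t_max."""
    return w_q * p.perceptual_quality + w_s * p.semantic_coverage \
         + w_t * (1.0 - min(p.exec_time_s, t_max) / t_max)

plans = [
    Plan("transcode only", 0.9, 0.6, 2.0),
    Plan("crop to region of interest", 0.7, 0.9, 4.0),
    Plan("drop annotated objects", 0.8, 0.4, 1.0),
]
best = max(plans, key=utility)
print(best.name, round(utility(best), 3))
```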
PIAF is a complete modular MPEG standard compliant framework that covers the whole process of semantic video adaptation. We validated this research with qualitative and quantitative evaluations, which assess the performance and the efficiency of the proposed adaptation decision-taking engine within PIAF. The experimental results show that the proposed UF has a high correlation with subjective video quality evaluation. 2014 urn:nbn:de:bvb:739-opus-27360 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-221 Dissertation Mousselly Sergieh, Hatem Search-based Automatic Image Annotation Using Geotagged Community Photos In the Web 2.0 era, platforms for sharing and collaboratively annotating images with keywords, called tags, became very popular. Tags are a powerful means for organizing and retrieving photos. However, manual tagging is time consuming. Recently, the sheer amount of user-tagged photos available on the Web encouraged researchers to explore new techniques for automatic image annotation. The idea is to annotate an unlabeled image by propagating the labels of community photos that are visually similar to it. Most recently, an ever increasing amount of community photos is also associated with location information, i.e., geotagged. In this thesis, we aim at exploiting the location context and propose an approach for automatically annotating geotagged photos. Our objective is to address the main limitations of state-of-the-art approaches in terms of the quality of the produced tags and the speed of the complete annotation process. To achieve these goals, we, first, deal with the problem of collecting images with the associated metadata from online repositories. Accordingly, we introduce a strategy for data crawling that takes advantage of location information and the social relationships among the contributors of the photos. To improve the quality of the collected user-tags, we present a method for resolving their ambiguity based on tag relatedness information. In this respect, we propose an approach for representing tags as probability distributions based on the algorithm of Laplacian score feature selection. Furthermore, we propose a new metric for calculating the distance between tag probability distributions by extending Jensen-Shannon Divergence to account for statistical fluctuations. To efficiently identify the visual neighbors, the thesis introduces two extensions to the state-of-the-art image matching algorithm, known as Speeded Up Robust Features (SURF). To speed up the matching, we present a solution for reducing the number of compared SURF descriptors based on classification techniques, while the accuracy of SURF is improved through an efficient method for iterative image matching. Furthermore, we propose a statistical model for ranking the mined annotations according to their relevance to the target image. This is achieved by combining multi-modal information in a statistical framework based on Bayes' rule. Finally, the effectiveness of each of mentioned contributions as well as the complete automatic annotation process are evaluated experimentally. 2014 urn:nbn:de:bvb:739-opus-27387 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-176 Dissertation Hofmeier, Andreas Vergleichen und Aggregieren von partiellen Ordnungen Das Vergleichen und Aggregieren von Informationen ist ein zentraler Bereich in der Analyse von Wahlsystemen. 
In diesen müssen die verschiedenen Meinungen von Wählern über eine Menge von Kandidaten zu einem möglichst gerechten Wahlergebnis aggregiert werden. In den meisten politischen Wahlen entscheidet sich jeder Wähler durch Ankreuzen für einen einzigen Kandidaten. Daneben werden aber auch Rangordnungsprobleme als eine Variante von Wahlsystemen untersucht. Bei diesen bringt jeder Wähler seine Meinung in Form einer totalen Ordnung über der Menge der Kandidaten zum Ausdruck, wodurch seine oftmals komplexe Meinung exakter repräsentiert werden kann als durch die Auswahl eines einzigen, favorisierten Kandidaten. Das Wahlergebnis eines Rangordnungsproblems ist dann eine ebenfalls totale Ordnung der Kandidaten, welche die geringste Distanz zu den Meinungen der Wähler aufweist. Als Distanzmaße zwischen zwei totalen Ordnungen haben sich neben anderen Kendalls Tau-Distanz und Spearmans Footrule-Distanz etabliert. Durch moderne Anwendungsmöglichkeiten von Rangordnungsproblemen im maschinellen Lernen, in der künstlichen Intelligenz, in der Bioinformatik und vor allem in verschiedenen Bereichen des World Wide Web rücken bereits bekannte, jedoch bislang eher wenig studierte Aspekte in den Fokus der Forschung. Zum einen gewinnt die algorithmische Komplexität von Rangordnungsproblemen an Bedeutung. Zum anderen existieren in vielen dieser Anwendungen unvollständige „Wählermeinungen" mit unentschiedenen oder unvergleichbaren Kandidaten, so dass totale Ordnungen zu deren Repräsentation nicht länger geeignet sind. Die vorliegende Arbeit greift diese beiden Aspekte auf und betrachtet die algorithmische Komplexität von Rangordnungsproblemen, in denen Wählermeinungen anstatt durch totale Ordnungen durch schwache oder partielle Ordnungen repräsentiert werden. Dazu werden Kendalls Tau-Distanz und Spearmans Footrule-Distanz auf verschiedene nahe liegende Arten verallgemeinert. Es zeigt sich dabei, dass nun bereits die Distanzberechnung zwischen zwei Ordnungen ein algorithmisch komplexes Problem darstellt. So ist die Berechnung der verallgemeinerten Versionen von Kendalls Tau-Distanz oder Spearmans Footrule-Distanz für schwache Ordnungen noch effizient möglich. Sobald jedoch partielle Ordnungen betrachtet werden, sind die Probleme NP-vollständig, also vermutlich nicht mehr effizient lösbar. In diesem Fall werden Resultate zur Approximierbarkeit und zur parametrisierten Komplexität der Probleme vorgestellt. Auch die Komplexität der Rangordnungsprobleme selbst erhöht sich. Für totale Ordnungen effizient lösbare Varianten werden für schwache Ordnungen NP-vollständig, für totale Ordnungen NP-vollständige Varianten hingegen liegen für partielle Ordnungen teilweise außerhalb der Komplexitätsklasse NP. Die Arbeit schließt mit einem Ausblick auf offene Problemstellungen. 2012 urn:nbn:de:bvb:739-opus-26858 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-369 Dissertation Herkenhöner, Ralph Tackling cloud compliance through information flow control IT outsourcing to clouds bears new challenges to the technical implementation of legally compliant clouds. On the one hand, outsourcing companies have to comply with legal requirements. On the other hand, cloud providers have to support their customers in achieving compliance with these legal requirements when processing data in the cloud. Consequently, the questions arise when IT outsourcing to clouds is lawful, which legal requirements apply to data processing in clouds, and how cloud providers can support their customers on achieving legal compliance. 
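Returning briefly to the two classical distance measures named in the Hofmeier abstract above, here is a small worked example for total orders; the rankings are made up, and the thesis' generalisations to weak and partial orders are not sketched here.

```python
# Worked example of Kendall's tau distance and Spearman's footrule distance,
# restricted to total orders (complete rankings of the same candidates).
from itertools import combinations

def kendall_tau(r1: list, r2: list) -> int:
    """Number of candidate pairs ordered differently by the two rankings."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def spearman_footrule(r1: list, r2: list) -> int:
    """Sum of absolute rank displacements between the two rankings."""
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(abs(i - pos2[c]) for i, c in enumerate(r1))

voter_a = ["x", "y", "z", "w"]
voter_b = ["y", "x", "w", "z"]
print(kendall_tau(voter_a, voter_b))        # 2 discordant pairs: (x,y) and (z,w)
print(spearman_footrule(voter_a, voter_b))  # |0-1|+|1-0|+|2-3|+|3-2| = 4
```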
In this thesis, answers to these questions are given by performing a legal analysis that identifies the legal requirements and a technical analysis that identifies how these requirements can be addressed in the context of cloud computing. Further, an information flow analysis is performed, resulting in a system-theoretical model that is able to describe information flow control in clouds based on the security classification of virtual and hardware resources. In a proof-of-concept implementation based on the OpenStack open-source cloud platform, it is shown that information flow control can be implemented as part of cloud management and that legal compliance can be monitored and reported based on the actual assignment of virtual resources to hardware resources. Thereby, cloud providers are able to provide cloud customers with cloud resources that are automatically assigned to hardware resources complying with the legal requirements of the cloud customers. This consequently empowers cloud customers to utilise cloud resources according to their legal requirements and to keep control of the legal compliance of their data processing in clouds. 2015 271 urn:nbn:de:bvb:739-opus4-3696 Fakultät für Informatik und Mathematik OPUS4-422 Dissertation Jiang, Jie Delay Testing in Nanoscale Technology under Process Variations In modern CMOS technology, process variations have a significantly increased impact on circuit behavior as transistor sizes continue to shrink. Manufactured devices tend to differ in performance due to parameter variations during manufacturing and in the operating context. Conventional tests, generated without regard to variations, may fail to rule out devices with low performance or even functional failures caused by extreme variations; this in turn increases the unreliability of shipped products. To tackle the problem, many existing test approaches have focused on identifying and testing a number of critical paths in the circuit, and on making the search process efficient. However, the statistical circuit model, which better describes the circuit timing behavior under variations, has not yet been sufficiently investigated and employed by existing testing methodologies. This thesis proposes Opt-KLPG and MIRID, which can be utilized in a statistical delay testing flow. Opt-KLPG, a K Longest Paths Generation (KLPG) algorithm for optimal solutions under memory constraints, builds on the traditional KLPG algorithm to generate targeted tests for small delay defects, which are common small timing deviations under process variations. In contrast to KLPG, Opt-KLPG guarantees the optimality of the solution, i.e., it indeed returns the K longest sensitizable paths. MIRID is a mixed-mode timing-aware simulator that incorporates the effects of power-supply noise and combines an event-driven logic simulation engine with interfaces to the provided electrical models. MIRID aims at evaluating delay tests in the presence of process variations efficiently yet accurately, by performing logic simulation at the gate level while determining the gate delays using simplified electrical models. The electrical models applied by the simulator focus on the IR drop effect; the electrical parameters that mainly contribute to this effect are incorporated into the model. The simulator is generic and can be flexibly adapted by modifying the interfaces with minor effort.
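As a rough, hypothetical illustration of the "K longest paths" notion used by KLPG and Opt-KLPG: the brute-force sketch below enumerates all paths of a tiny gate graph and keeps the K longest by accumulated delay. It is not the thesis' algorithm; real KLPG avoids full enumeration and considers path sensitisability and memory constraints.

```python
# Toy top-K longest paths by exhaustive enumeration over a tiny DAG.
import heapq

def all_paths(graph, node, delays, acc=0.0, path=()):
    path = path + (node,)
    acc += delays[node]
    if not graph[node]:                      # sink reached
        yield acc, path
    for succ in graph[node]:
        yield from all_paths(graph, succ, delays, acc, path)

# Hypothetical circuit graph: gate -> successor gates, plus per-gate delays (ns).
graph  = {"in": ["g1", "g2"], "g1": ["g3"], "g2": ["g3", "out"],
          "g3": ["out"], "out": []}
delays = {"in": 0.0, "g1": 1.2, "g2": 0.7, "g3": 0.9, "out": 0.1}

k_longest = heapq.nlargest(2, all_paths(graph, "in", delays))
for delay, path in k_longest:
    print(round(delay, 2), "->".join(path))
```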
Both applications were verified in various respects by experiments on academic and industrial circuits, and turned out to have satisfactory effectiveness and performance. 2016 urn:nbn:de:bvb:739-opus4-4229 Fakultät für Informatik und Mathematik OPUS4-66 Dissertation Stingl, Christian Realisierung einer Server based Computing-Lösung Die Dissertation setzt sich mit dem Thema Server based Computing (SbC) im Schulbereich auseinander. Neben den technischen Grundlagen der SbC-Technologie im Vergleich zu den anderen vorherrschenden Technologien (Peer to Peer, Client/Server) wird die Migration zur SbC-Technologie anhand zweier Fallstudien behandelt. Zudem wird für eine ganzheitliche Betrachtung des Themas die wirtschaftliche Vorteilhaftigkeit anhand eines Kosten-Nutzen-Vergleichs ermittelt. Darüber hinaus wird ein Dienstleistungskonzept für die Ausstattung und den Betrieb der schulischen IT-Infrastruktur auf der Basis einer Public-Private-Partnership entwickelt. Als Erweiterung dieses Konzepts wird ein Betreibermodell besprochen, das die Distribution der SbC-Technologie über das Internet durch einen Education Service Provider ermöglichen soll. 2006 urn:nbn:de:bvb:739-opus-779 Mitarbeiter Lehrstuhl/Einrichtung der Wirtschaftswissenschaftlichen Fakultät OPUS4-68 Dissertation Götzfried, Michael Service-orientierte Anwendungsintegration im Intra- und Internet Die Integration von Anwendungen unterschiedlicher Art stellt seit geraumer Zeit eine der großen Herausforderungen in der Unternehmens-IT dar. Die vorliegende Dissertation untersucht die Frage, inwiefern der Einsatz der service-orientierten Architektur als Software-Architektur vor dem Hintergrund der Entwicklungen auf den Gebieten der Web Services und des Grid Computing einen Beitrag zur Erfüllung betriebswirtschaftlicher und technischer Anforderungen in verschiedenen Integrationsszenarien leisten kann. Dazu werden verschiedene Typen von IT-Sourcing-Szenarien herangezogen, die vom innerbetrieblichen Aufbau einer service-orientierten Architektur, über die Einbindung von Partnern und Serviceunternehmen im Rahmen von Outsourcing bis hin zum Provisioning von Anwendungen und Rechendiensten reichen. Es wird die Möglichkeit einer Entkopplung der betriebswirtschaftlichen Entscheidung von der technischen Realisierung aufgezeigt. Durch die steigende Komplexität der Szenarien und durch eine damit einhergehende abnehmende Eingriffsmöglichkeit seitens des Managements wird die Tragfähigkeit der Architektur getestet, um komplexe Einmalnutzungsszenarien im Sinne des Service Providing vor dem Hintergrund gestiegener technologischer Möglichkeiten zu beleuchten. Ausgewählte Aspekte werden prototypisch realisiert. 2006 urn:nbn:de:bvb:739-opus-790 Mitarbeiter Lehrstuhl/Einrichtung der Wirtschaftswissenschaftlichen Fakultät OPUS4-42 Dissertation Robschink, Torsten Pfadbedingungen in Abhängigkeitsgraphen und ihre Anwendung in der Softwaresicherheitstechnik Diese Arbeit präsentiert eine neue Methode zur Sicherheitsanalyse von Software im Bereich der Manipulationsprüfung und der Einhaltung von Informationsflüssen zwischen verschiedenen Sicherheitsniveaus. Program-Slicing und Constraint-Solving sind eigenständige Verfahren, die sowohl zur Abhängigkeitsbestimmung als auch zur Berechnung arithmetischer Eigenschaften verwendet werden. Die erstmalige Kombination dieser beiden Verfahren mittels Pfadbedingungen liefert nicht nur binäre Abhängigkeitsinformationen wie Slicing, sondern exakte notwendige Bedingungen über die Informationsflüsse zwischen zwei Programmpunkten.
Neben der Definition der Grundlagen von Abhängigkeitsgraphen und einfachen Pfadbedingungen werden neue Erweiterungen für kontextsensitive interprozedurale Pfadbedingungen gezeigt und die Integration von domänenspezifischen Verfahren für Arrayfelder und abstrakten Datentypen demonstriert. Der Schwerpunkt der Arbeit liegt in der Realisierung von Pfadbedingungen für echte Programme in echten Programmiersprachen. Hierfür werden Verfahren vorgeschlagen, realisiert und empirisch untersucht, wie Pfadbedingungen für große Programme skalieren. Die zum Einsatz kommenden Techniken sind u.a. Intervallanalyse und Binäre Entscheidungsgraphen, mit denen die generelle exponentielle Komplexität von Pfadbedingungen beherrschbar wird. Fallstudien für den Einsatz von Pfadbedingungen und die empirische Untersuchung mehrerer Verfahren zur Intervallanalyse zeigen, dass Pfadbedingungen für die praktische Programmanalyse und das Programmverstehen geeignet und empfehlenswert sind. 2004 urn:nbn:de:bvb:739-opus-469 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-761 Dissertation Kronawitter, Stefan Automatic Performance Optimization of Stencil Codes A widely used class of codes are stencil codes. Their general structure is very simple: data points in a large grid are repeatedly recomputed from neighboring values. This predefined neighborhood is the so-called stencil. Despite their very simple structure, stencil codes are hard to optimize since only few computations are performed while a comparatively large number of values have to be accessed, i.e., stencil codes usually have a very low computational intensity. Moreover, the set of optimizations and their parameters also depend on the hardware on which the code is executed. To cut a long story short, current production compilers are not able to fully optimize this class of codes and optimizing each application by hand is not practical. As a remedy, we propose a set of optimizations and describe how they can be applied automatically by a code generator for the domain of stencil codes. A combination of a space and time tiling is able to increase the data locality, which significantly reduces the memory-bandwidth requirements: a standard three-dimensional 7-point Jacobi stencil can be accelerated by a factor of 3. This optimization can target basically any stencil code, while others are more specialized. E.g., support for arbitrary linear data layout transformations is especially beneficial for colored kernels, such as a Red-Black Gauss-Seidel smoother. On the one hand, an optimized data layout for such kernels reduces the bandwidth requirements while, on the other hand, it simplifies an explicit vectorization. Other noticeable optimizations described in detail are redundancy elimination techniques to eliminate common subexpressions both in a sequence of statements and across loop boundaries, arithmetic simplifications and normalizations, and the vectorization mentioned previously. In combination, these optimizations are able to increase the performance not only of the model problem given by Poisson's equation, but also of real-world applications: an optical flow simulation and the simulation of a non-isothermal and non-Newtonian fluid flow. 
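For reference, here is a naive NumPy version of a three-dimensional 7-point stencil (a simple averaging variant) like the one named in the Kronawitter abstract above. It is a sketch rather than code from the thesis, but it shows exactly the kind of bandwidth-bound loop nest that tiling and data-layout transformations accelerate.

```python
# Naive 7-point Jacobi-style sweeps over a 3D grid (centre + 6 axis neighbours).
import numpy as np

def jacobi_7pt(u: np.ndarray, sweeps: int) -> np.ndarray:
    """Repeatedly replace every interior point by the average of the 7-point
    neighbourhood, reading the old grid and writing a new one (Jacobi style)."""
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1, 1:-1, 1:-1] = (u[1:-1, 1:-1, 1:-1]
                               + u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]
                               + u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]
                               + u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) / 7.0
        u = v
    return u

grid = np.zeros((32, 32, 32))
grid[16, 16, 16] = 1.0            # point source that gets smoothed outwards
print(jacobi_7pt(grid, 5)[16, 16, 16])
```

A combined space and time tiling, as described in the abstract, reorders such sweeps so that grid values are reused from cache before they are evicted, which is where the reported speedups come from.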
2019 xiii, 130 Seiten urn:nbn:de:bvb:739-opus4-7618 Fakultät für Informatik und Mathematik OPUS4-73 Dissertation Störzer, Maximilian Impact Analysis for AspectJ - A Critical Analysis and Tool-Based Approach to AOP Aspect-Oriented Programming (AOP) has been promoted as a solution for modularization problems known as the tyranny of the dominant decomposition in literature. However, when analyzing AOP languages it can be doubted that uncontrolled AOP is indeed a silver bullet. The contributions of the work presented in this thesis are twofold. First, we critically analyze AOP language constructs and their effects on program semantics to sensitize programmers and researchers to resulting problems. We further demonstrate that AOP—as available in AspectJ and similar languages—can easily result in less understandable, less evolvable, and thus error prone code—quite opposite to its claims. Second, we examine how tools relying on both static and dynamic program analysis can help to detect problematical usage of aspect-oriented constructs. We propose to use change impact analysis techniques to both automatically determine the impact of aspects and to deal with AOP system evolution. We further introduce an analysis technique to detect potential semantical issues related to undefined advice precedence. The thesis concludes with an overview of available open source AspectJ systems and an assessment of aspect-oriented programming considering both fundamentals of software engineering and the contents of this thesis. 2006 urn:nbn:de:bvb:739-opus-897 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-18 Dissertation Schreiber, Falk Visualisierung biochemischer Reaktionsnetze In dieser Arbeit werden Anforderungen an die Darstellung biochemischer Reaktionsnetze untersucht und die Netze unter dem Gesichtspunkt der Visualisierung modelliert. Anschliessend wird ein Algorithmus zum Zeichnen biochemischer Reaktionsnetze entwickelt und analysiert. 2001 urn:nbn:de:bvb:739-opus-215 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-26 Dissertation Hanning, Tobias Vektorielle Mehrniveaupassung. Anwendungen in der Bildsegmentierung Die Anwendungen der vektoriellen Mehrniveaupassung in der Bildsegmentierung stehen in unmittelbarer Nachbarschaft zum verbreiteten Ansatz, Bilder mit Methoden der Variationsrechnung über einen Energieterm zu segmentieren. Beiden Verfahren ist gemeinsam, daß sie versuchen, das gegebene Bild durch stückweise stetige Funktionen zu approximieren. Die maximalen Teilmengen des Definitionsbereichs, auf denen die approximierende Funktion stetig ist, bilden dann die Segmente. Im Gegensatz zu dem in der Literatur häufig unter dem Schlagwort Mumford-Shah-Modell bekannten Energieminimierungsverfahren ist der Raum der Funktionen, mit denen das Bild approximiert wird, bei der vektoriellen Mehrniveaupassung ein endlicher Vektorraum. Ein weiterer Unterschied zu diesem Ansatz ist das System der erlaubten Mengen. Es werden nur Segmentierungen erlaubt, deren Segmente aus diesem Mengensystem sind. Die durch diese Einschränkung schlankere Theorie führt zu einer gesicherten Existenz einer optimalen Lösung des Segmentierungsproblems, die der gängigen Vorstellung einer Segmentierung genügt. Die Berechnung lokaler Optima ist algorithmisch innerhalb der Theorie umsetzbar. 
2002 urn:nbn:de:bvb:739-opus-295 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-20 Dissertation Fischer, Bernd Deduction-Based Software Component Retrieval Deduction-based software component retrieval is a software reuse technique that uses formal specifications as component descriptors and as search keys; matching components are identified using an automated theorem prover. This dissertation contains a detailed theoretical investigation of the concept as well as the first substantial experimental evaluation of its technical feasibility. 2001 urn:nbn:de:bvb:739-opus-231 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-24 Dissertation Braumandl, Reinhard Quality of Service and Optimization in Data Integration Systems This work presents techniques for the construction of a global data integrations system. Similar to distributed databases this system allows declarative queries in order to express user-specific information needs. Scalability towards global data integration systems and openness were major design goals for the architecture and techniques developed in this work. It is shown how service composition, extensibility and quality of service can be supported in an open system of providers for data, functionality for query processing operations, and computing power. 2002 urn:nbn:de:bvb:739-opus-279 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-43 Dissertation Urban, Christoph Das Referenzmodell PECS - Agentenbasierte Modellierung menschlichen Handelns, Entscheidens und Verhaltens Die Agententechnologie hat in den letzten Jahren eine rasante Entwicklung erfahren und mittlerweile großen Einfluss auf verschiedene Bereiche in Wissenschaft und Technik genommen. Insbesondere wurde die agentenbasierte Modellbildung und Simulation als wirkungsvolles Mittel für die Untersuchung realer oder hypothetischer Systeme erkannt. Im Vordergrund stehen hierbei ganz besonders Systeme, die durch menschliches Handeln, Entscheiden und Verhalten beeinflusst werden, oder der Mensch selbst, um seine Eigenschaften und Fähigkeiten aus den Blickwinkeln unterschiedlicher wissenschaftlicher Disziplinen heraus weiterführend zu erforschen. Das primäre Ziel der vorliegenden Arbeit besteht darin, den Entwurf agenten-basierter Simulationsmodelle, in denen menschliches Handeln, Entscheiden und Verhalten von ausschlaggebender Bedeutung sind, auf konzeptioneller Ebene zu unterstützen. Um dieses Ziel zu erreichen, wird das domänen- und theorieunabhängige Referenzmodell PECS vorgestellt, das Strukturierungsprinzipien aus der Informatik mit systemtheoretischen Ansätzen verbindet, um das Wechselspiel vielfältiger Einflussbereiche auf das menschliche Handeln im Rahmen einer integrativen und umfassenden Agentenarchitektur abzubilden. Auf Grundlage des Referenzmodells werden insgesamt vier charakteristische Fallstudien aus der Psychologie, der Sozialpsychologie, der Soziologie und der experimentellen Ökonomie entwickelt, um den Einsatz des Referenzmodells in der Praxis zu demonstrieren. 2004 urn:nbn:de:bvb:739-opus-471 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-50 Dissertation Schwarzfischer, Thomas Quality and Utility - On the Use of Time-Value Functions to Integrate Quality and Timeliness Flexible Aspects in a Dynamic Real-Time Scheduling Environment Scheduling methodologies for real-time applications have been of keen interest to diverse research communities for several decades. 
Depending on the application area, algorithms have been developed that are tailored to specific requirements with respect to both the individual components of which an application is made up and the computational platform on which it is to be executed. Many real-time scheduling algorithms base their decisions solely or partly on timing constraints expressed by deadlines which must be met even under worst-case conditions. The increasing complexity of computing hardware means that worst-case execution time analysis becomes increasingly pessimistic. Scheduling hard real-time computations according to their worst-case execution times (which is common practice) will thus result, on average, in an increasing amount of spare capacity. The main goal of flexible real-time scheduling is to exploit this otherwise wasted capacity. Flexible scheduling schemes have been proposed to increase the ability of a real-time system to adapt to changing requirements and nondeterminism in the application behaviour. These models can be categorised as those whose source of flexibility is the quality of computations and those which are flexible regarding their timing constraints. This work describes a novel model which allows to specify both flexible timing constraints and quality profiles for an application. Furthermore, it demonstrates the applicability of this specification method to real-world examples and suggests a set of feasible scheduling algorithms for the proposed problem class. 2004 urn:nbn:de:bvb:739-opus-619 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-53 Dissertation Streckenbach, Mirko KABA - a system for refactoring Java programs Refactoring is a well known technique to enhance various aspects of an object-oriented program. It has become very popular during recent years, as it allows to overcome deficits present in many programs. Doing refactoring by hand is almost impossible due to the size and complexity of modern software systems. Automated tools provide support for the application of refactorings, but do not give hints, which refactorings to apply and why. The Snelting/Tip analysis is a program analysis, which creates a refactoring proposal for a class hierarchy by analyzing how class members are used inside a program. KABA is an adaption and extension of the Snelting/Tip analysis for Java. It has been implemented and expanded to become a semantic preserving, interactive refactoring system. Case studies of real world programs will show the usefulness of the system and its practical value. 2005 urn:nbn:de:bvb:739-opus-638 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-201 Dissertation Wüchner, Patrick Energy-Efficient and Timely Event Reporting Using Wireless Sensor Networks This thesis investigates the suitability of state-of-the-art protocols for large-scale and long-term environmental event monitoring using wireless sensor networks based on the application scenario of early forest fire detection. By suitable combination of energy-efficient protocol mechanisms a novel communication protocol, referred to as cross-layer message-merging protocol (XLMMP), is developed. Qualitative and quantitative protocol analyses are carried out to confirm that XLMMP is particularly suitable for this application area. The quantitative analysis is mainly based on finite-source retrial queues with multiple unreliable servers. 
While this queueing model is widely applicable in various research areas even beyond communication networks, this thesis is the first to determine the distribution of the response time in this model. The model evaluation is mainly carried out using Markovian analysis and the method of phases. The obtained quantitative results show that XLMMP is a feasible basis to design scalable wireless sensor networks that (1) may comprise hundreds of thousands of tiny sensor nodes with reduced node complexity, (2) are suitable to monitor an area of tens of square kilometers, (3) achieve a lifetime of several years. The deduced quantifiable relationships between key network parameters — e.g., node size, node density, size of the monitored area, aspired lifetime, and the maximum end-to-end communication delay — enable application-specific optimization of the protocol. 2013 urn:nbn:de:bvb:739-opus-27159 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-116 Dissertation Größlinger, Armin The Challenges of Non-linear Parameters and Variables in Automatic Loop Parallelisation With the rise of manycore processors, parallelism is becoming a mainstream necessity. Unfortunately, parallel programming is inherently more difficult than sequential programming; therefore, techniques for automatic parallelisation will become indispensable. We aim at extending the well-known polyhedron model, which promises this automation, beyond some of its current restrictions. Up to now, loop bounds and array subscripts in the modelled codes must be expressions linear in both the variables and the parameters. We lift this restriction and allow certain polynomial expressions instead of linear ones. With our extensions, we are able to handle more programs in all phases of the parallelisation process (dependence analysis, transformation of the program model, code generation). We extend Banerjee's classical dependence analysis to handle one non-linear parameter p, i.e., we are able to determine precisely the solutions of the system of conflict equalities for input programs with non-linear array accesses like A[p*i] in dependence of the residue class of p. We make contributions to three transformations desirable in automatic parallelisation. First, we show that using a generalised Simplex algorithm, which we have developed, schedules with non-linear parameters like theta(i)=floor(i/n) can be computed. In addition, such schedules can be expressed easily as a quantifier elimination problem but this approach turns out to be computationally less efficient with the available implementation. As a second transformation, we study parametric tiling which is used to adapt a parallelised program to the number of available processors at run time. Third, we present a localisation technique to exploit scratchpad memories on architectures on which data caching has to be handled by software. We transform a given code such that it keeps values which are reused in successive iterations of a sequential loop in the scratchpad. An access to a value written in an earlier iteration is served from the scratchpad to accelerate the access. In general, this transformation introduces non-linear loop bounds in the transformed model. Finally, we present an algorithm for generating code for arbitrary semi-algebraic iteration sets, i.e., for iteration sets described by polynomial inequalities in the variables and parameters. This is a vast generalisation of existing polyhedral code generation techniques. 
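The following hedged sketch (not code from the thesis) illustrates why non-linear parameters arise in parametric tiling: with a tile size n known only at run time, the tile index of iteration i is floor(i/n), the same shape as the schedule theta(i) = floor(i/n) mentioned above.

```python
# Parametric tiling of a 1D loop; the tile loop implicitly computes floor(i/n).
def work(i):
    return i * i                                  # stand-in for the loop body

def original(N):
    return [work(i) for i in range(N)]

def tiled(N, n):
    out = []
    for tile in range((N + n - 1) // n):          # tile index = floor(i / n)
        for i in range(tile * n, min((tile + 1) * n, N)):
            out.append(work(i))
    return out

assert original(10) == tiled(10, 3)               # same iterations, tile by tile
print(tiled(10, 3))
```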
Although our algorithm is less efficient than polyhedral code generators, this paves the way for a code generator that can handle arbitrary parametric tilings and other transformations which introduce non-linear parameters (like non-linear schedules and the localisation we present) or even non-linear variables. 2009 urn:nbn:de:bvb:739-opus-17893 Sonstiger Autor der Fakultät für Informatik und Mathematik OPUS4-200 Dissertation Seedorf, Jan Security for Decentralised Service Location - Exemplified with Real-Time Communication Session Establishment Decentralised Service Location, i.e. finding an application communication endpoint based on a Distributed Hash Table (DHT), is a fairly new concept. The precise security implications of this approach have not been studied in detail. More importantly, a detailed analysis regarding the applicability of existing security solutions to this concept has not been conducted. In many cases existing client-server approaches to security may not be feasible. In addition, to understand the necessity for such an analysis, it is key to acknowledge that Decentralised Service Location has some unique security requirements compared to other P2P applications such as filesharing or live streaming. This thesis concerns the security challenges for Decentralised Service Location. The goals of our work are on the one hand to precisely understand the security requirements and research challenges for Decentralised Service Location, and on the other hand to develop and evaluate corresponding security mechanisms. The thesis is organised as follows. First, fundamentals are explained and the scope of the thesis is defined. Decentralised Service Location is defined and P2PSIP is explained technically as a prototypical example. Then, a security analysis for P2PSIP is presented. Based on this security analysis, security requirements for Decentralised Service Location and the corresponding research challenges -- i.e. security concerns not suitably mitigated by existing solutions -- are derived. Second, several decentralised solutions are presented and evaluated to tackle the security challenges for Decentralised Service Location. We present decentralised algorithms to enable availability of the DHTs lookup service in the presence of adversary nodes. These algorithms are evaluated via simulation and compared to analytical bounds. Further, a cryptographic approach based on self-certifying identities is illustrated and discussed. This approach enables decentralised integrity protection of location-bindings. Finally, a decentralised approach to assess unknown identities is introduced. The approach is based on a Web-of-Trust model. It is evaluated via prototypical implementation. Finally, the thesis closes with a summary of the main contributions and a discussion of open issues. 2012 urn:nbn:de:bvb:739-opus-27147 Sonstiger Autor der Fakultät für Informatik und Mathematik OPUS4-241 Dissertation Peintner, Daniel Efficient Exchange and Processing of Semi-structured Data in the Embedded Domain The Internet is a global system of interconnected computers and computer networks where semi-structured data has been successfully applied for exchanging information. In nowadays Internet the huge range of actors, the large diversity of the associated device classes and domains, and the enormous amount of resource-restricted controllers in this system created new requirements and coined also a new term. 
Internet of Things (IoT), in this regard, refers to identifiable objects (things) and their virtual representations in an Internet-like structure. The fundamental question the thesis tries to answer is whether and how the same semi-structured data can also be applied to the IoT and the embedded domain in spite of resource-limited controllers. In order to discuss this question, properties and requirements of embedded networks with regard to the IoT domain have been collected and evaluated. Thereafter, the omnipresent semi-structured data exchange format on the Web, the Extensible Markup Language (XML), was evaluated. The result was a list of unmet requirements, such as a compact representation that can be generated and consumed fast and that allows a small-footprint implementation. To address the compiled requirements, a binary representation of XML, nowadays known as the W3C's Efficient XML Interchange (EXI) format, was developed; it simultaneously optimizes performance and the utilization of computational resources and is designed to be compatible with XML. Moreover, the format has been practically validated and tested in this work. Addressing the needs of the embedded domain, one result of this analysis was a set of optimizations that constrain runtime memory usage and predict memory growth at runtime. A concept introduced in this thesis is LazyDOM, which reduces memory requirements when processing and querying data. By means of a newly proposed code generation technique, processing of EXI on ultra-constrained device classes has been enabled, and the resulting format modifications have been adopted in the W3C standardization. The research work described in this thesis on efficiently exchanging and processing semi-structured data on constrained embedded devices has not only triggered modifications in the W3C EXI format but has already been adopted in domain-specific application standards and implementations. The above-mentioned optimizations, such as predictably limiting memory growth at runtime, were contributed to the W3C, discussed and evaluated by its experts, and have become a core part of the EXI specification. Even more significantly from the IoT perspective, these optimizations provide the basis for the adoption of this technology in ISO and IEC standardization, marking the first time that the automotive and power industries use IoT in the control plane. The EXI implementation created to conduct the evaluation in this thesis has become the de facto open-source reference implementation of EXI and the basis of a number of other reference implementations, such as the OpenV2G project, which provides the reference implementation of the communication interface in ISO/IEC 15118. In summary, the conducted research has evaluated the options for adapting semi-structured data to the constrained embedded domain, proposed modifications, and evaluated them under realistic conditions. This made the work relevant for technology standardization as well as application standardization, despite its short duration. As such, the research can now be taken as a basis for further challenges in the IoT field, namely adopting concepts of the Semantic Web and adapting them to stimulate the quickly expanding ecosystem of embedded devices.
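As a toy illustration of why a binary encoding of XML saves space: recurring element names can be replaced by small integer codes drawn from a string table. The sketch below is didactic only and deliberately much simpler than the actual EXI format.

```python
# Toy "binary XML": replace tag names by one-byte codes plus length-prefixed values.
import struct

def encode(events):
    """events: list of (tag, value) pairs; returns the byte blob and the string table."""
    table, out = {}, bytearray()
    for tag, value in events:
        code = table.setdefault(tag, len(table))    # assign codes on first use
        payload = value.encode("utf-8")
        out += struct.pack("BB", code, len(payload)) + payload
    return bytes(out), table

events = [("temperature", "21.5"), ("humidity", "40"), ("temperature", "21.7")]
blob, table = encode(events)
xml = "".join(f"<{t}>{v}</{t}>" for t, v in events)
print(len(xml), "bytes as text vs", len(blob), "bytes encoded", table)
```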
2014 urn:nbn:de:bvb:739-opus-27610 10.15475/UPA1113 Sonstiger Autor der Fakultät für Informatik und Mathematik OPUS4-216 Dissertation Käbisch, Sebastian Resource Optimization of SOA-Technologies in Embedded Networks Embedded networks are fundamental infrastructures in many different kinds of domains, such as home or industrial automation, the automotive industry, and future smart grids. Yet they can be very heterogeneous, containing wired and wireless nodes with different kinds of resources and service capabilities, such as sensing, acting, and processing. Driven by new opportunities and business models, embedded networks will play an ever more important role in the future, interconnecting more and more devices, even from other network domains. Realizing applications for such networks, however, is a highly challenging task, since various aspects have to be considered, including communication between a diverse assortment of resource-constrained nodes, such as microcontrollers, as well as a flexible node infrastructure. Service Oriented Architecture (SOA) with Web services would perfectly meet these unique characteristics of embedded networks and ease the development of applications. Standardized Web services, however, are based on plain-text XML, which is not suitable for microcontroller-based devices with their very limited resources due to XML's verbosity, its memory and bandwidth usage, and its associated significant processing overhead. This thesis presents methods and strategies for realizing efficient XML-based Web service communication in embedded networks by means of binary XML using the EXI format. We present a code generation approach to create optimized and dedicated service applications in resource-constrained embedded networks. In so doing, we demonstrate how EXI grammars can be optimally constructed and applied to the Web service and service requester context. In addition, to realize optimized service interaction in embedded networks, we design and develop a filter-enabled service data dissemination scheme that takes into account the individual resource capabilities of the nodes and the connection quality within embedded networks. We show different approaches for efficiently evaluating binary XML data and applying it to resource-constrained devices such as microcontrollers. Furthermore, we present the effective placement of binary XML filters in embedded networks with the aim of reducing both the computational load of constrained nodes and the network traffic. Various evaluation results for V2G applications prove the efficiency of our approach compared to existing solutions, and they also demonstrate the seamless and successful applicability of SOA-based technologies in microcontroller-based environments. 2013 urn:nbn:de:bvb:739-opus-27338 Sonstiger Autor der Fakultät für Informatik und Mathematik OPUS4-52 Dissertation Ramsauer, Markus Energie- und qualitätsbewußte Einplanung von periodischen Prozessen in eingebetteten Echtzeitsystemen Mobile Geräte dienen immer häufiger zur Ausführung von Echtzeitanwendungen, sie bieten immer mehr Rechenleistung und sie werden kleiner und leichter. Hohe Rechenleistung erfordert jedoch sehr viel Energie, was im Gegensatz zu den geringen Akkukapazitäten, die aus der Forderung nach kleinen und leichten Geräten resultieren, steht.
Bei der Echtzeiteinplanung von Rechenprozessen gewinnt daher der Energieverbrauch der Geräte neben der rechtzeitigen Beendigung von Anwendungen zunehmend an Bedeutung, weil sie möglichst lange unabhängig vom Stromnetz betrieben werden sollen. Andererseits werden auf diesen Geräten rechenintensive Anwendungen ausgeführt, bei denen es wünschenswert ist, die maximale mit der verfügbaren Rechenleistung erzielbare Qualität zu erhalten. In dieser Arbeit wird ein Systemmodell vorgestellt, das den Design-to-time-Ansatz mit den Möglichkeiten der dynamischen Leistungsanpassung (Rechenleistung und verbrauchte elektrische Leistung) moderner Prozessoren vereinigt. Der Design-to-time-Ansatz ermöglicht Energieeinsparungen oder Qualitätssteigerungen durch die dynamische Auswahl alternativer Implementierungen, welche dieselbe Aufgabe mit unterschiedlicher Ausführungsdauer und Qualität bzw. Energieverbrauch erfüllen. Das Systemmodell umfaßt unter anderem periodische Prozesse mit harten Echtzeitbedingungen, Datenabhängigkeiten und alternativen Implementierungen, sowie Prozessoren mit diskreten Leistungsstufen. Die Einplanung der Prozesse erfolgt in zwei Phasen. In der Offline-Phase wird ein flexibler Schedule berechnet, der für die zur Laufzeit möglichen Kombinationen von verstrichener Zeit und noch einzuplanender Prozeßmenge den jeweils einzuplanenden Prozeß, sowie die zu verwendende Implementierung und gegebenenfalls die einzustellende Leistungsstufe beinhaltet. Dieser flexible Schedule wird während der Online-Phase mit vernachlässigbarem Zeit- und Energieaufwand von einem Scheduler interpretiert. Für die Berechnung der optimalen flexiblen Schedules wurde ein Optimierer entwickelt, der eine Folge von flexiblen Schedules mit monoton steigender Güte (niedriger Energieverbrauch bzw. hohe Qualität) generiert, und damit der Klasse der Anytime-Algorithmen zuzuordnen ist. Eine Variante der Dynamischen Programmierung dient zur Bestimmung global optimaler, flexibler Schedules, die beispielsweise als Basis für Benchmarks dienen. Eine auf Simulated Annealing basierende Variante des Optimierers ermöglicht ein schnelleres Auffinden guter, flexibler Schedules für umfangreichere Anwendungen. 2005 urn:nbn:de:bvb:739-opus-628 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-220 Dissertation Vilsmaier, Christian Contextualized Access To Distributed And Heterogeneous Multimedia Data Sources Making multimedia data available online is becoming less expensive and more convenient by the day. This development promotes web phenomena such as Facebook, Twitter, and Flickr. These phenomena and their increased acceptance in society in turn lead to a multiplication of the number of images available online. This vast amount of frequently public, and therefore searchable, images already exceeds the zettabyte bound. Executing a similarity search over the sheer number of images that are publicly available on the Internet and receiving a top-quality result is a challenge that the scientific community has recently attempted to rise to. One approach to coping with this problem relies on the use of distributed heterogeneous Content Based Image Retrieval systems (CBIRs). Following this assumption, the problems that emerge from a distributed query scenario must be dealt with, for example the involved CBIRs' use of distinct metadata formats for describing their content, as well as their differing technical and structural information.
An additional issue is the individual metrics used by the CBIRs to calculate the similarity between pictures, as well as the specific ways in which these metrics are combined. Overall, obtaining good results in this environment is a very labor-intensive task that has been explored scientifically, but not yet comprehensively. The problem primarily addressed in this work is the collection of pictures that are similar to a given picture from CBIRs, in response to a distributed multimedia query. The main contribution of this thesis is the construction of a network of Content Based Image Retrieval systems that are able to extract and exploit the information about an input image's semantic concept. This so-called semantic CBIRn is mainly composed of CBIRs that are configured by the semantic CBIRn itself. In addition, specialized external sources can be integrated. The semantic CBIRn is able to collect and merge the results of all of these attached CBIRs. In order to be able to integrate external sources that are willing to join the network, but are not willing to disclose their configuration, an algorithm was developed that approximates these configurations. By categorizing existing as well as external CBIRs and analyzing incoming queries, image queries are forwarded exclusively to the most suitable CBIRs. In this way, images that are of no use to the user can be omitted beforehand. The returned images are then rendered comparable so that they can be merged into a single result list of images that are similar to the input image. The feasibility of the approach and the resulting improvement of the search process are demonstrated by a prototypical implementation and its evaluation using classified images from ImageNet. Using this prototypical implementation, the number of returned images that share the semantic concept of the input image is increased by a factor of 4.75 with respect to a predefined non-semantic CBIRn. 2014 urn:nbn:de:bvb:739-opus-27374 Sonstiger Autor der Fakultät für Informatik und Mathematik OPUS4-13 Dissertation Mian Syed, Alexandra Engineering wissensbasierter Navigation und Steuerung autonom-mobiler Systeme Die autonome Steuerung mobiler, technischer Systeme in nicht exakt vorherbestimmbaren Situationen erfordert Methoden der autonomen Entscheidungsfindung, um ein planvolles, zielgerichtetes Agieren und Reagieren unter Echtzeitbedingungen realisieren zu können. Während mittels mathematischer Formeln Basisverhalten, wie beispielsweise in einer Geradeausbewegung, einer Drehung, bei einem Abbremsen, und in Gefahrenmomenten schnelle Reaktionen, berechnet werden, benötigt man auf der anderen Seite ein Regelsystem, um darüber hinaus "intelligentes", d.h. situationsangepaßtes Verhalten zu produzieren und gleichzeitig im Hinblick auf ein Missionsziel planvoll agieren zu können. Derartige Regelsysteme müssen sich auf einer abstrakten Ebene formulieren lassen, sollen sie vom Menschen problemlos entwickelbar, leicht modifizierbar und gut verifizierbar bleiben. Eine aufgrund ihres Konzeptes geeignete Programmierwelt ist die Logikprogrammierung. Ziel der Logikprogrammierung ist es weniger, Arbeitsabläufe zu beschreiben, als vielmehr Wissen in Form von Fakten zu spezifizieren und mit Hilfe von Regeln Schlußfolgerungen aus diesen Fakten ziehen zu können.
Die klassische Logikprogrammierung ist jedoch aufgrund ihres Auswertungsmechanismus der SLD-Resolution (linear resolution with selected function for definite clauses) zu langsam für die Anwendung bei Echtzeitsystemen. Auch parallele Sprachformen, die ebenfalls mit SLD-Resolution arbeiten, erreichen beim Einsatz auf (von Neumann-) Mehrprozessorsystemen bislang nicht die notwendige Effizienz. Das Anwendungsgebiet der deduktiven Datenbanken hat im Vergleich dazu durch Bottom-Up Techniken einen anderen Auswertungsansatz mit deutlich höherer Effizienz geliefert. Viele dort auftretenden Probleme können jedoch nur durch die Integration anforderungsgetriebener Abarbeitung gelöst werden. Auf der anderen Seite stellen Datenflußrechnerarchitekturen aufgrund der automatisierten Ausbeutung feinkörniger Parallelität einen hervorragenden Ansatz der Parallelverarbeitung dar. Bei Datenflußrechnerarchitekturen handelt es sich um (Mehrprozessor-) Systeme, deren datengetriebener Abarbeitungsmechanismus sich grundlegend vom weit verbreiteten kontrollflußgesteuerten von Neumann-Prinzip unterscheidet.Überlegungen zur Struktur von Steuerungssystemen werden ergeben, daß es mittels Ansätzen aus dem Gebiet der deduktiven Datenbanken möglich ist, ein für diese Aufgabenstellung neuartiges, ausschließlich datengetriebenes Auswertungsmodell zu generieren. Dabei vermeidet es Probleme, die bei Bottom-Up Verfahren auftreten können, wie z.B. das Auftreten unendlicher Wertemengen und die späte Einschränkung auf relevante Werte, ohne gleichzeitig die Stratifizierung von Programmen zu gefährden. Ergebnis der Arbeit ist eine anwendungsbezogene, problemorientierte Entwicklungsumgebung, die einerseits die Entwicklung und Verifikation der Spezifikation mit existierenden Werkzeugen erlaubt und andererseits die effiziente, parallele Ausführung auf geeigneten Rechensystemen ermöglicht. Zusätzlich wird die Voraussetzung geschaffen, verschiedene weitere, für die Steuerung autonomer Systeme unverzichtbare Komponenten in das Abarbeitungsmodell zu integrieren. Simulationsergebnisse werden belegen, daß das vorgestellte Berechnungsmodell bezüglich realer Anwendungsbeispiele bereits in einer Monoprozessorversion Echtzeitbedingungen genügt. Damit ist die Voraussetzung für die Ausführung zukünftiger, weitaus komplexerer Steuerungsprobleme, ggf. auf Mehrprozessorsystemen, in Echtzeit geschaffen. 2001 urn:nbn:de:bvb:739-opus-159 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-10 Dissertation Bachl, Sabine Isomorphe Subgraphen und deren Anwendung beim Zeichnen von Graphen In der Arbeit wird der Begriff der Isomorphen Subgraphen definiert. Anschließend werden theoretische und praktische Ergebnisse bei der Erkennung Isomorpher Graphen erörtert. 2001 urn:nbn:de:bvb:739-opus-149 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-2 Dissertation Börncke, Frank Modellierung syntaktischer Strukturen natürlicher Sprachen mit Graphgrammatiken Die vorliegende Arbeit erschließt durch die Formalisierung einer linguistischen Theorie Möglichkeiten zum Entwurf generischer Verfahren zur Verarbeitung natürlicher Sprachen. Zu diesem Zweck setzen wir Graphsprachen für die Modellierung syntaktischer Strukturen ein. Damit lassen sich Ergebnisse der linguistischen Forschung mit Begriffen der Graphentheorie beschreiben und bewerten. Zu diesem Ansatz motiviert der Umstand, daß in der Linguistik im Rahmen der Syntax jedem Satz einer natürlichen Sprache eine nichtsequentielle Struktur zugesprochen wird. 
Diese Struktur überlagert die lineare Wortfolge, die wir als Satz kennen. Eine Menge solcher syntaktischen Strukturen die wir mit Graphen modellieren können betrachten wir als Graphsprache. Die Arbeit zeigt, wie sich solche Graphsprachen mit Hilfe von Graphgrammatiken beschreiben lassen. Wie alle formalen Sprachen zeichnen sich auch Graphgrammatiken dadurch aus, daß sie mathematisch wohldefniert sind. Dies stellt eine notwendige Voraussetzung dar, um Aussagen über eine Sprache zu beweisen. Von Interesse ist dabei vor allem die Untersuchung unendlicher Mengen. Das Ziel besteht dann darin, für sie eine endliche Beschreibung zu finden. Diese Aufgabe wird in der Regel von einer Grammatik erfüllt. Darüber hinaus ist man an erkennenden Algorithmen für Sprachen interessiert, die das Wortproblem effizient lösen. Bezüglich natürlicher Sprachen werden beide Aufgabenstellungen in dieser Arbeit mit Hilfe von Graphgrammatiken gelöst. 2000 urn:nbn:de:bvb:739-opus-28 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-4 Dissertation Dolzmann, Andreas Algorithmic strategies for applicable real quantifier elimination One of the most important algorithms for real quantifier elimination is the quantifier elimination by virtual substitution introduced by Weispfenning in 1988. In this thesis we present numerous algorithmic approaches for optimizing this quantifier elimination algorithm. Optimization goals are the actual running time of the implementation of the algorithm and the size of the output formula. Strategies for obtaining these goals include simplification of first-order formulas,reduction of the size of the computed elimination set, and condensing a new replacement for the virtual substitution. Local quantifier elimination computes formulas that are equivalent to the input formula only nearby a given point. We can make use of this restriction for further optimizing the quantifier elimination by virtual substitution. Finally we discuss how to solve a large class of scheduling problems by real quantifier elimination. To optimize our algorithm for solving scheduling problems we make use of the special form of the input formula and of additional information given by the description of the scheduling problem 2000 urn:nbn:de:bvb:739-opus-64 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-101 Dissertation Weitl, Franz Document Verification with Temporal Description Logics The thesis proposes a new formal framework for checking the content of web documents along individual reading paths. It is vital for the readability of web documents that their content is consistent and coherent along the possible browsing paths through the document. Manually ensuring the coherence of content along the possibly huge number of different browsing paths in a web document is time-consuming and error-prone. Existing methods for document validation and verification are not sufficiently expressive and efficient. The innovative core idea of this thesis is to combine the temporal logic CTL and description logic ALC for the representation of consistency criteria. The resulting new temporal description logics ALCCTL can - in contrast to existing specification formalisms - compactly represent coherence criteria on documents. Verification of web documents is modelled as a model checking problem of ALCCTL. The decidability and polynomial complexity of the ALCCTL model checking problem is proven and a sound, complete, and optimal model checking algorithm is presented. 
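As a standard textbook illustration of real quantifier elimination (not an example taken from the Dolzmann thesis itself), eliminating the quantified variable from a quadratic constraint replaces it by an equivalent condition on the parameters alone:

\[
  \exists x \,\bigl( x^{2} + p\,x + q \le 0 \bigr)
  \;\Longleftrightarrow\;
  p^{2} - 4q \ge 0,
\]

since the minimum of \(x^{2} + p\,x + q\) over all real \(x\) is \(q - p^{2}/4\). Quantifier elimination by virtual substitution obtains such parameter conditions by substituting finitely many symbolic test points for the quantified variable.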
Case studies on real and realistic web documents demonstrate the performance and adequacy of the proposed methods. Existing methods such as symbolic model checking or XML-based document validation are outperformed in both expressiveness and speed. 2007 urn:nbn:de:bvb:739-opus-12528 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-97 Dissertation Schwaiger, Petra Ein Bedingungsmodell für Planungsprobleme in strukturierten Domänen Computerunterstützte Beratungssysteme finden sowohl in der Industrie als auch im akademischen Bereich eine zunehmende Bedeutung. Die Anforderungen an solche Systeme sind hinsichtlich der abbildbaren Strukturen, der Flexibilität der Anfragen und der Vollständigkeit und Korrektheit der Antworten hoch. Dies gilt insbesondere für Planungsprobleme in strukturierten Domänen. Derartige Probleme treten beispielsweise bei der Erstellung von Tests auf der Grundlage einer Menge von Fragen und gewissen Anforderungen an den Test, bei der Konsistenzprüfung von Studienordnungen und bei der computerunterstützten Studienberatung auf. In der vorliegenden Arbeit wird ein Framework zur Behandlung eben genannter Probleme präsentiert. Die vorgestellte Lösung bietet durch den modellbasierten Ansatz und die entwickelte anwendungsnahe Modellierungssprache - gerade auch im Vergleich zu existierenden Ansätzen - einen sehr hohen Grad an Abstraktion, Allgemeingültigkeit, Ausdrucksstärke, Flexibilität und Integrierbarkeit. Im Rahmen des entwickelten Modells wird eine geeignete Verzahnung von strukturellen und constraintbasierten Aspekten erreicht. Der hierbei in Syntax und Semantik definierte Constraintbegriff kann darüber hinaus als Formalisierung und Verallgemeinerung von Pfadconstraints bzw. Pfadanfragen in hierarchischen Datenmodellen aufgefasst werden. Für die interne Repräsentation erweist sich ein logikbasierter Ansatz mit Constraints, nämlich Answer Set Programming mit Gewichten, als eine ausgezeichnete Methode bezüglich der Ausdrucksstärke, Mächtigkeit und Adäquatheit. Die Praxistauglichkeit des verfolgten Ansatzes im Hinblick auf Performanz und Skalierbarkeit wird in verschiedenen realen Anwendungsfällen demonstriert. 2008 urn:nbn:de:bvb:739-opus-12497 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-38 Dissertation Krinke, Jens Advanced Slicing of Sequential and Concurrent Programs Program slicing is a technique to identify statements that may influence the computations in other statements. Despite the ongoing research of almost 25 years, program slicing still has problems that prevent a widespread use: Sometimes, slices are too big to understand and too expensive and complicated to be computed for real-life programs. This thesis presents solutions to these problems: It contains various approaches which help the user to understand a slice more easily by making it more focused on the user's problem. All of these approaches have been implemented in the VALSOFT system and thorough evaluations of the proposed algorithms are presented. The underlying data structures used for slicing are program dependence graphs. They can also be used for different purposes: A new approach to clone detection based on identifying similar subgraphs in program dependence graphs is presented; it is able to detect modified clones better than other tools. In the theoretical part, this thesis presents a high-precision approach to slice concurrent procedural programs despite that optimal slicing is known to be undecidable. 
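As a minimal illustration of the slicing idea described in the Krinke abstract above: on a program dependence graph, a backward slice is simply backward reachability along dependence edges. The toy graph and statement labels below are invented for illustration and are not VALSOFT output.

```python
# Backward slice = backward reachability on a (toy) program dependence graph.
from collections import deque

# dependence edges: statement -> statements it depends on (data or control)
pdg = {
    "s1: x = read()": [],
    "s2: y = read()": [],
    "s3: z = x + 1":  ["s1: x = read()"],
    "s4: w = y * 2":  ["s2: y = read()"],
    "s5: print(z)":   ["s3: z = x + 1"],
}

def backward_slice(pdg, criterion):
    """All statements that may influence the slicing criterion."""
    seen, todo = {criterion}, deque([criterion])
    while todo:
        for dep in pdg[todo.popleft()]:
            if dep not in seen:
                seen.add(dep)
                todo.append(dep)
    return seen

print(sorted(backward_slice(pdg, "s5: print(z)")))   # s1, s3, s5 but not s2, s4
```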
It is the first approach to slice concurrent programs that does not rely on inlining of called procedures. 2003 urn:nbn:de:bvb:739-opus-375 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-214 Dissertation Stegmaier, Florian Unified Retrieval in Distributed and Heterogeneous Multimedia Information Systems Multimedia retrieval is an essential part of today's world. This situation is observable in industrial domains, e.g., medical imaging, as well as in the private sector, as evidenced by the activities on manifold Social Media platforms. This trend led to the creation of a huge environment of multimedia information retrieval services offering multimedia resources for almost any user request. Indeed, the encompassed data is in general retrievable by (proprietary) APIs and query languages, but unfortunately a unified access is not possible due to interoperability issues arising between those services. In this regard, this thesis focuses on two application scenarios, namely a medical retrieval system supporting a radiologist's workflow, as well as an interoperable image retrieval service interconnecting diverse data silos. The scientific contribution of this dissertation is split into three different parts: the first part of this thesis addresses the metadata interoperability issue. Here, major contributions to a community-driven, international standardization have been proposed, leading to the specification of an API and ontology to enable a unified annotation and retrieval of media resources. The second part presents a metasearch engine especially designed for unified retrieval in distributed and heterogeneous multimedia retrieval environments. This metasearch engine is capable of being operated in a federated as well as autonomous manner inside the aforementioned application scenarios. The third part ensures efficient retrieval through the integration of optimization techniques for multimedia retrieval into the overall query execution process of the metasearch engine. 2014 urn:nbn:de:bvb:739-opus-27317 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-159 Dissertation Kunze, Kai Compensating for On-Body Placement Effects in Activity Recognition This thesis investigates how placement variations of electronic devices influence the possibility of using sensors integrated in those devices for context recognition. The vast majority of context recognition research assumes well-defined, fixed sensor locations. Although this might be acceptable for some application domains (e.g. in an industrial setting), users, in general, will have a hard time coping with these limitations. If one needs to remember to carry dedicated sensors and to adjust their orientation from time to time, the activity recognition system is more distracting than helpful. How can we deal with device location and orientation changes to make context sensing mainstream? This thesis presents a systematic evaluation of device placement effects in context recognition. We first deal with detecting if a device is carried on the body or placed somewhere in the environment. If the device is placed on the body, it is useful to know on which body part. We also address how to deal with sensors changing their position and their orientation during use. For each of these topics some highlights are given in the following. Regarding environmental placement, we introduce an active sampling approach to infer symbolic object location.
This approach requires only simple sensors (acceleration, sound) and no infrastructure setup. The method works for specific placements such as "on the couch" or "in the desk drawer", as well as for general location classes, such as "closed wood compartment" or "open iron surface". In the experimental evaluation we reach a recognition accuracy of 90% and above over a total of over 1200 measurements from 35 specific locations (taken from 3 different rooms) and 12 abstract location classes. To derive the coarse device placement on the body, we present a method solely based on rotation and acceleration signals from the device. It works independently of the device orientation. The on-body placement recognition rate is around 80% over 4 min. of unconstrained motion data for the worst scenario and up to 90% over a 2 min. interval for the best scenario. We use over 30 hours of motion data for the analysis. Two special issues of device placement are orientation and displacement. This thesis proposes a set of heuristics that significantly increase the robustness of motion sensor-based activity recognition with respect to sensor displacement. We show how, within certain limits and with modest quality degradation, motion sensor-based activity recognition can be implemented in a displacement-tolerant way. We evaluate our heuristics first on a set of synthetic lower arm motions which are well suited to illustrate the strengths and limits of our approach, then on an extended modes of locomotion problem (sensors on the upper leg) and finally on a set of exercises performed on various gym machines (sensors placed on the lower arm). In this example our heuristic raises the displaced recognition rate from 24% for a displaced accelerometer, which had 96% recognition when not displaced, to 82%. 2011 urn:nbn:de:bvb:739-opus-26114 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-100 Dissertation Faber, Peter Code Optimization in the Polyhedron Model - Improving the Efficiency of Parallel Loop Nests A safe basis for automatic loop parallelization is the polyhedron model which represents the iteration domain of a loop nest as a polyhedron in $\mathbb{Z}^n$. However, turning the parallel loop program in the model into efficient code meets with several obstacles, due to which performance may deteriorate seriously -- especially on distributed memory architectures. We introduce a fine-grained model of the computation performed and show how this model can be applied to create efficient code. 2007 urn:nbn:de:bvb:739-opus-12512 Sonstiger Autor der Fakultät für Informatik und Mathematik OPUS4-243 Dissertation Schönberg, Christian Semantic Processing of Digital Documents Precise, content-rich and well-structured document models are required for applications like verifying the consistency of documents. Creating such models for common documents is currently an expensive and error-prone process. In this thesis we present a novel approach to modelling and processing digital documents that uses semantic technologies. In contrast to other modelling approaches, we model the structure of documents as indicated by the content, not as defined by technical attributes like the file format. Additionally, our meta-model can be applied to a wide range of different documents, not just to a small set of documents with a predefined set of features. The models include semantic data and content relationships, which can be further extended with domain knowledge.
Our new separation of technical and semantic document models fuels a standardised method for obtaining semantic models. This method is effective, suitable for live processing, and easily transferable to other document types and other domains. As it makes extensive use of background knowledge, we also present techniques for obtaining such knowledge, and for representing complex forms of knowledge with multiple meta-layers. A flexible technique for obtaining relevant data from our document models completes the approach. This includes the ability to obtain various verification models, suitable for different types of consistency criteria and for different validation formalisms. We conclude this thesis with an evaluation that shows the viability and effectiveness of the proposed approach. We present runtime results for an implementation based on RDF/OWL and the rule language JBoss Drools that are adequate for live processing. We also provide and successfully apply techniques for measuring the quality of both document models and background knowledge. 2013 urn:nbn:de:bvb:739-opus-27635 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-57 Dissertation Maydl, Walter Komponentenbasierte Softwareentwicklung für datenflußorientierte eingebettete Systeme Diese Dissertation beschäftigt sich mit den Problemen bei der Entwicklung von effizienter und zuverlässiger Software für eingebettete Systeme. Eingebettete Systeme sind inhärent nebenläufig, was einen Grund für ihre hohe Entwurfskomplexität darstellt. Aus dieser Nebenläufigkeit resultiert ein hoher Grad an Kommunikation zwischen den einzelnen Komponenten. Eine wichtige Forderung zur Vereinfachung des Entwurfsprozesses besteht in der getrennten Modellierung von Kommunikationsprotokollen und eigentlichen Verarbeitungsalgorithmen. Daraus resultiert eine höhere Wiederverwendbarkeit bei sich ändernden Kommunikationsstrukturen. Die Grundlage für die sogenannten Datenflußsprachen bildet eine einfache von Gilles Kahn konzipierte Sprache für Parallelverarbeitung. In dieser Sprache besteht ein System aus einer Menge sequentieller Prozesse (Komponenten), die über Fifokanäle miteinander kommunizieren. Ein Prozess ist rechenbereit, wenn seine Eingangsfifos mit entsprechenden Daten gefüllt sind. Übertragen werden physikalische Signale, die als Ströme bezeichnet werden. Ströme sind Folgen von Werten ohne explizite Zeitangaben. Das Einsatzgebiet von Datenflußsprachen liegt in der Entwicklung von Programmen zur Bild- und Signalverarbeitung, typischen Aufgaben in eingebetteten Systemen. Die Programmierung erfolgt visuell, wobei man Icons als Repräsentanten parametrisierbarer Komponenten aus einer Bibliothek auswählt und mittels Kanten (Fifos) verbindet. Ein im allgemeinen dynamischer Scheduler überwacht die Ausführung des fertiggestellten Anwendungsprogramms. Diese Arbeit schlägt ein universelleres Modell physikalischer Signale vor. Dabei werden zwei Ziele verfolgt: 1. Effiziente Kommunikation zwischen den Komponenten 2. Entwurfsbegleitende Überprüfung von Programmeigenschaften unter Verwendung komplexerer Komponentenmodelle Zur Effizienzsteigerung werden nur relevante Werte innerhalb von Strömen übertragen. Dies erhöht zwar den Mehraufwand zur Kennzeichnung des Aufbaus eines Teilstroms, in praktischen Anwendungen ist die hier vorgestellte Methode jedoch effizienter. Die Einführung neuer Signalmerkmale erlaubt unterschiedlichste Überprüfungen der Einhaltung von Typregeln durch die Eingangs- und Ausgangsströme einer Komponente.
Anstelle einfacher Schaltregeln werden aufwendigere Kommunikationsprotokolle für die verschiedenen Arten von Komponenten eingeführt. Fifomaten (Fifo-Automaten) dienen als formale Grundlage. Mittels eines dezidierten Model-Checking-Verfahrens wird das Zusammenspiel der Fifomaten daraufhin untersucht, ob ein zyklischer Schedule existiert. Die Existenz eines solchen zyklischen Schedules schließt Speicherüberlauf und Deadlocks aus und garantiert darüber hinaus, daß das Programm nach endlicher Zeit wieder in die Ausgangssituation zurückfindet. Da im allgemeinen die Datenflußprogramme turingäquivalent sind, kann es allerdings zyklische Schedules geben, die das Verfahren nicht entdeckt. Mit der hier vorgestellten und implementierten Methode wird die Entwicklungszeit korrekter Datenflußprogramme deutlich reduziert. Das neue Modell physikalischer Signale macht zudem die Ausführung effizienter. 2005 urn:nbn:de:bvb:739-opus-681 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-19 Dissertation Zukowski, Ulrich Flexible Computation of the Well-Founded Semantics of Normal Logic Programs The well-founded semantics has been accepted as the most relevant semantics for logic-based information systems. In this dissertation a framework based on a set of program transformations is presented that generalizes all major computation approaches for the well-founded semantics using a common data structure and provides a common language to describe their evaluation strategy. This rewriting system gives the formal background to analyze and combine different evaluation strategies in a common framework, or to design new algorithms and prove the correctness of its implementations at a high level just by changing the order of program transformations. 2001 urn:nbn:de:bvb:739-opus-226 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-34 Dissertation Bachmaier, Christian Circle Planarity of Level Graphs In this dissertation we generalise the notion of level planar graphs in two directions: track planarity and radial planarity. Our main results are linear time algorithms both for the planarity test and for the computation of an embedding, and thus a drawing. Our algorithms use and generalise PQ-trees, which are a data structure for efficient planarity tests. 2004 urn:nbn:de:bvb:739-opus-385 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-122 Dissertation Brunner, Wolfgang Cyclic Level Drawings of Directed Graphs The Sugiyama framework proposed in the seminal paper of 1981 is one of the most important algorithms in graph drawing and is widely used for visualizing directed graphs. In its common version, it draws graphs hierarchically and, hence, maps the topological direction to a geometric direction. However, such a hierarchical layout is not possible if the graph contains cycles, which have to be destroyed in a preceding step. In certain application and problem settings, e.g., bio sciences or periodic scheduling problems, it is important that the cyclic structure of the input graph is preserved and clearly visible in drawings. Sugiyama et al. also suggested apart from the nowadays standard horizontal algorithm a cyclic version they called recurrent hierarchies. However, this cyclic drawing style has not received much attention since. In this thesis we consider such cyclic drawings and investigate the Sugiyama framework for this new scenario. 
As our goal is to visualize cycles directly, the first phase of the Sugiyama framework, which is concerned with removing such cycles, can be neglected. The cyclic structure of the graph leads to new problems in the remaining phases, however, for which solutions are proposed in this thesis. The aim is a complete adaption of the Sugiyama framework for cyclic drawings. To complement our adaption of the Sugiyama framework, we also treat the problem of cyclic level planarity and present a linear time cyclic level planarity testing and embedding algorithm for strongly connected graphs. 2010 urn:nbn:de:bvb:739-opus-17962 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-921 Dissertation Salehi Rizi, Fatemeh Graph Representation Learning for Social Networks Online social networks provide a rich source of information about millions of users worldwide. However, due to sparsity and complex structure, analyzing these networks is quite challenging and expensive. Recently, graph embedding emerged to map networked data into low-dimensional representations, i.e. vector embeddings. These representations are fed into off-the-shelf machine learning algorithms to simplify and speed up graph analytic tasks. Given the immense importance of social network analysis, in this thesis, we aim to study graph embedding for social networks in three directions. Firstly, we focus on social networks at microscopic level to primarily encode the structural characteristic of users' personal networks so-called ego networks. These representations are utilized in evaluation tasks whose performance depends on relational information from direct neighbors. For example, social circle prediction and event attendance inference both need structural information from neighbors in social networks. Secondly, we explore assessing the content of vector embeddings in terms of topological properties. This could be explained via two proposed approaches: 1) a learning to rank algorithm in which the model weights reveal the importance of properties at subgraph level (ego networks), 2) a regression model for direct approximation of network statistical properties at vertex level. Thirdly, we propose extensions of graph embedding to capture sign or additional content of social networks. Users in social media often express their feelings and attitudes towards others which forms sentiment links besides social links. We design a joint objective function whose terms capture semantics of both social and sentiment links simultaneously. We also propose a multi-task learning framework for networks with attributes and labels by stacking autoencoders. The weights of the learning tasks are automatically assigned via an adaptive loss weighting layer. 2021 ix, 130 Seiten urn:nbn:de:bvb:739-opus4-9211 Fakultät für Informatik und Mathematik OPUS4-443 Dissertation Klaghstan, Merza Multimedia data dissemination in opportunistic systems Opportunistic networks (OppNets) are human-centric mobile ad-hoc networks, in which neither the topology nor the participating nodes are known in advance. Routing is dynamically planned following the store-carry-and-forward paradigm, which takes advantage of people mobility. This widens the range of communication and supports indirect end-to-end data delivery. But due to individuals' mobility, OppNets are characterized by frequent communication disruptions and uncertain data delivery. Hence, these networks are mostly used for exchanging small messages like disaster alarms or traffic notifications. 
Other scenarios that require the exchange of larger data (e.g. video) are still challenging due to the characteristics of this kind of network. However, there are still multimedia sharing scenarios where a user might need to switch from infrastructure-based communication to an ad-hoc alternative. Examples are the cases of 1) the absence of infrastructural networks in remote rural areas, 2) high costs due to roaming or limited data volumes, or 3) undesirable censorship by third parties while exchanging sensitive content. Consequently, we target in this thesis a video dissemination scheme in OppNets. For the video delivery problem in sparse opportunistic networks, we propose a solution with the objective of reducing the video playout delay, enabling the recipient to play the video content as soon as possible, even if at a low quality. Furthermore, the received video later reaches a higher quality level, ensuring a better viewing experience. The proposed solution comprises three contributions. The first one granulates the videos at the source node into smaller parts and associates them with unequal redundancy degrees. This is technically based on Scalable Video Coding (SVC), which encodes a video into several layers of unequal importance for viewing the content at different quality levels. Layers are routed using the Spray-and-Wait routing protocol, with different redundancy factors for the different layers depending on their importance degree. In this context, a video viewing QoE metric is also proposed, which takes the perceived video quality, the delivery delay and the network overhead into consideration on a scalable basis. Second, we take advantage of the small units of the Network Abstraction Layer (NAL), which compose SVC layers. NAL units are packetized together under specific size constraints to optimize granularity. Packet sizes are tuned in an adaptive way, with regard to the dynamic network conditions. Each node is enabled to record a history of environmental information regarding the contacts and forwarding opportunities, and use this history to predict future opportunities and optimize the sizes accordingly. Lastly, the receiver (destination) node is pushed into action by reacting to missing data parts in a composite "backward" loss concealment mechanism. The receiver first asks for the missing data from other nodes in the network in the form of request-response. Then, since the transmission is concerned with video content, video frame loss error concealment techniques are also exploited at the receiver side. Consequently, we propose to combine the two techniques in the loss concealment mechanism, which is then enabled to react to missing data parts. To study the feasibility and the applicability of the proposed solutions, simulation-driven experiments are performed, and statistical results are collected and analyzed. We obtained promising results that show the applicability of video dissemination in opportunistic delay tolerant networks, and open the door to a range of possible future works. 2016 160 S. urn:nbn:de:bvb:739-opus4-4438 Fakultät für Informatik und Mathematik OPUS4-505 Dissertation He, Xiaobing Threat Assessment for Multistage Cyber Attacks in Smart Grid Communication Networks In smart grids, managing and controlling power operations are supported by information and communication technology (ICT) and supervisory control and data acquisition (SCADA) systems.
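As a rough illustration of the layer-aware redundancy idea in the Klaghstan abstract above, the following hypothetical sketch assigns Spray-and-Wait copy budgets per SVC layer according to importance and applies the usual binary halving on each contact. The copy numbers and the per-layer mapping are invented and are not taken from the thesis.

```python
# Hypothetical copy budgets: more important SVC layers get more copies.
LAYER_COPIES = {"base": 16, "enhancement-1": 8, "enhancement-2": 4}

def initial_copies(layer):
    """Copy budget a source node attaches to a message of the given layer."""
    return LAYER_COPIES[layer]

def on_contact(copies_left):
    """Binary Spray-and-Wait: hand over half of the remaining copies to the
    encountered node; once a single copy is left, wait for the destination."""
    if copies_left <= 1:
        return copies_left, 0          # wait phase: no further spraying
    handed_over = copies_left // 2
    return copies_left - handed_over, handed_over

# Example: spraying the base layer across a few opportunistic contacts.
mine = initial_copies("base")
for contact in range(4):
    mine, given = on_contact(mine)
    print(f"contact {contact}: keep {mine}, hand over {given}")
```

The point of the unequal budgets is simply that the base layer, which already yields a playable low-quality video, gets the best delivery chances.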
The increasing adoption of new ICT assets in smart grids is making smart grids vulnerable to cyber threats, as well as raising numerous concerns about the adequacy of current security approaches. As a single act of penetration is often not sufficient for an attacker to achieve his/her goal, multistage cyber attacks may occur. Due to the interdependence between the power grid and the communication network, a multistage cyber attack not only affects the cyber system but impacts the physical system. This thesis investigates an application-oriented stochastic game-theoretic cyber threat assessment framework, which is strongly related to the information security risk management process as standardized in ISO/IEC 27005. The proposed cyber threat assessment framework seeks to address the specific challenges (e.g., dynamic changing attack scenarios and understanding cascading effects) when performing threat assessments for multistage cyber attacks in smart grid communication networks. The thesis looks at the stochastic and dynamic nature of multistage cyber attacks in smart grid use cases and develops a stochastic game-theoretic model to capture the interactions of the attacker and the defender in multistage attack scenarios. To provide a flexible and practical payoff formulation for the designed stochastic game-theoretic model, this thesis presents a mathematical analysis of cascading failure propagation (including both interdependency cascading failure propagation and node overloading cascading failure propagation) in smart grids. In addition, the thesis quantifies the characterizations of disruptive effects of cyber attacks on physical power grids. Furthermore, this thesis discusses, in detail, the ingredients of the developed stochastic game-theoretic model and presents the implementation steps of the investigated stochastic game-theoretic cyber threat assessment framework. An application of the proposed cyber threat assessment framework for evaluating a demonstrated multistage cyber attack scenario in smart grids is shown. The cyber threat assessment framework can be integrated into an existing risk management process, such as ISO 27000, or applied as a standalone threat assessment process in smart grid use cases. 2017 xv, 182 Seiten urn:nbn:de:bvb:739-opus4-5051 Fakultät für Informatik und Mathematik OPUS4-930 Dissertation Alyousef, Ammar E-Mobility Management: Towards a Grid-friendly Smart Charging Solution Replacing fossil-fueled vehicles with Electric Vehicles (EVs) poses new challenges for power distribution networks. Specifically speaking, the electrification of the mobility sector relies on the ability to process and analyze information on when, where, for how long, or how fast charging processes will take place. Nevertheless, such kind of information is typically difficult to acquire or insufficiently predictable due to the dynamic nature of the system. Also, the increasing adoption rate of the renewable energy sources, specifically the domestic Photovoltaic (PV) systems, and the potentially associated grid defection scenarios will significantly impact the cost and efforts required to operate the grid in terms of power quality and demand-supply aspects. However, such emerging requirements have arguably not been taken into account when the distribution grid was built originally. Besides, expanding the distribution and transmission capacity is a very costly and lengthy process. Therefore, any proposed solution should be cost-effective as well as environment-, grid- and user-friendly. 
To this end, the advancements in Information and Communications Technology (ICT) are increasingly adopted and applied. This thesis addresses the rapidly growing EV sector and deals with the problem of overcoming potential power quality degradation caused by the challenges mentioned above. Since time switch and radio ripple control as existing solutions in Germany are costly and neither very effective nor scalable, as they require hardware retrofitting of existing public Charging Stations (CSs), the primary focus of this work is the development of an appropriate, standards-based, scalable, and smart charging solution for EVs. Such a solution can, in turn, boost the usage of renewable energy by ensuring that the existing grid infrastructure can operate within its permissible limits while maintaining acceptable levels of power quality. This work introduces a new definition of the concept, "grid-friendly EV charging", where the power demand of a CS is adjusted depending on the real-time status of a power grid. In this regard, the conflicting concerns of stakeholders in an EV ecosystem are considered. For example, a Distribution System Operator (DSO) does not want to reveal a lot of technical details about the power grid or its status. Similarly, a Charging Service Provider (CSP) wants to keep its clients happy without sharing the details of its business model with others, namely, DSOs. For that sake, a distributed smart charging architecture is proposed in this thesis. It is event-driven and responds in near real-time to unforeseen and critical grid situations such as high/low voltage, congestion, phase unbalance, and harmonics. In that regard, the publish/subscribe messaging pattern, used as a part of the architecture, enables an efficient and well-performing communication scheme among the different components. Moreover, an indication mechanism for the different issues in a power grid is developed; it adopts the traffic light model. It works as a black box for the separate smart controllers of each CS and is configured only by the CSP. Smart chargers enable a smooth adjustment of the charging power to avoid drastic changes in the grid state. To that end, two types of intelligent controllers are developed and tested. While the first controller is inspired by fuzzy logic, the second one is inspired by the slow-start mechanism used in TCP to control congestion in computer networks. A simulative approach is applied to evaluate the solution, whereby a topology of a real low voltage grid with realistic load and generation profiles is used. Furthermore, a set of metrics is defined regarding the main concerns of stakeholders: voltage, overloading, fairness, the satisfaction of EV users and the grid operator, as well as the grid-friendly behavior of a CS/EV user. The evaluation shows that the solution is able to guarantee a safe operation of the grid. The proposed system can ensure grid-friendly charging by sacrificing a small portion of user satisfaction; this sacrifice is rewarded via a points-based reward system. Last but not least, the proposed distributed controllers are compared to two other controllers: (1) a decentralized controller based only on sensing the local voltage and (2) a very strict centralized controller focusing on grid-friendliness. The latter ensures proportional fairness among users regarding the objective function of the optimization problem solved in each simulation step.
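To give a feel for the TCP-slow-start analogy mentioned in the Alyousef abstract above, here is a deliberately simplified, hypothetical controller sketch (not the thesis implementation): charging power ramps up while the traffic-light indication is green and is cut back multiplicatively on a red signal. All constants are invented.

```python
def next_power(current_kw, light, max_kw=22.0, min_kw=1.4,
               ramp_kw=1.0, backoff=0.5):
    """Return the next charging power setpoint of one charging station.

    light: 'green' (grid healthy), 'amber' (hold), 'red' (critical event),
    loosely following the traffic-light indication described above.
    """
    if light == "red":
        return max(min_kw, current_kw * backoff)   # multiplicative decrease
    if light == "amber":
        return current_kw                          # hold the current level
    return min(max_kw, current_kw + ramp_kw)       # additive increase

# Example trajectory for a sequence of grid signals.
power = 1.4
for signal in ["green", "green", "green", "red", "amber", "green"]:
    power = next_power(power, signal)
    print(f"{signal:>5}: {power:.1f} kW")
```

The additive-increase/multiplicative-decrease shape is what makes the reaction to critical grid events fast while keeping the recovery of charging power smooth.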
The distributed controllers are superior to the decentralized controller in terms of grid-friendliness and fairness and, in general, converge to the centralized one. 2021 xvii, 159 Seiten urn:nbn:de:bvb:739-opus4-9302 Fakultät für Informatik und Mathematik OPUS4-504 Dissertation Zwicklbauer, Stefan Robust Entity Linking in Heterogeneous Domains Entity Linking is the task of mapping terms in arbitrary documents to entities in a knowledge base by identifying the correct semantic meaning. It is applied in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning and Question and Answering. Most existing Entity Linking systems were optimized for specific domains (e.g., general domain, biomedical domain), knowledge base types (e.g., DBpedia, Wikipedia), or document structures (e.g., tables) and types (e.g., news articles, tweets). This led to very specialized systems that lack robustness and are only applicable to very specific tasks. In this regard, this work focuses on the research and development of a robust Entity Linking system in terms of domains, knowledge base types, and document structures and types. To create a robust Entity Linking system, we first analyze the following three crucial components of an Entity Linking algorithm in terms of robustness criteria: (i) the underlying knowledge base, (ii) the entity relatedness measure, and (iii) the textual context matching technique. Based on the analyzed components, our scientific contributions are three-fold. First, we show that a federated approach leveraging knowledge from various knowledge base types can significantly improve robustness in Entity Linking systems. Second, we propose a new state-of-the-art, robust entity relatedness measure for topical coherence computation based on semantic entity embeddings. Third, we present the neural-network-based approach Doc2Vec as a textual context matching technique for robust Entity Linking. Based on our previous findings and outcomes, our main contribution in this work is DoSeR (Disambiguation of Semantic Resources). DoSeR is a robust, knowledge-base-agnostic Entity Linking framework that extracts relevant entity information from multiple knowledge bases in a fully automatic way. The integrated algorithm represents a collective, graph-based approach that utilizes semantic entity and document embeddings for entity relatedness and textual context matching computation. Our evaluation shows that DoSeR achieves state-of-the-art results over a wide range of different document structures (e.g., tables), document types (e.g., news documents) and domains (e.g., general domain, biomedical domain). In this context, DoSeR outperforms all other (publicly available) Entity Linking algorithms on most data sets. 2017 iv, 191 Seiten urn:nbn:de:bvb:739-opus4-5047 Fakultät für Informatik und Mathematik OPUS4-197 Dissertation Mayer, Tobias Rene Achieving Collaboration In Distributed Systems Deployed Over Selfish Peers - An Illustrative Case Study With Publish/Subscribe Up to a few years ago, the typical operation of a distributed architecture was modelled as the enactment of a collaborative protocol by networked nodes. In this context, all nodes were under the system designer's control, faithfully executing the programmed behaviour. However, today's networks are often characterized by a free aggregation of nodes.
Thus, the possibility increases that a selfish party operates a node, which may violate the collaborative protocol in order to increase a personal benefit. If such violations differ from the system goals, they can even be considered an attack. Current fault-tolerance techniques may weaken such harmful impacts to some degree, but they cannot necessarily prevent them. Furthermore, the various architectures differ in their fault-tolerance capabilities. This emphasizes the need for a systematic approach to achieve collaboration in distributed systems. In this PhD thesis we consider the problem of attaining a targeted level of collaboration in a distributed architecture deployed over rational, selfish-driven nodes, which have an interest in deviating from the communication protocol to increase a personal benefit. In order to reach this goal and to cover a broad spectrum of systems, we do not modify the architecture or communication protocol itself. Instead, we add a monitoring logic to inspect a node's behaviour in terms of the correct interaction with the system. With this approach, the system designer needs to weigh several aspects, such as the specific environmental circumstances, the inspection effort or the node's individual preferences. Furthermore, he should consider the fact that each agent could be aware of the other agents' preferences and selfishness, and make strategic choices accordingly. The natural frame for modelling such a complex, interdependent and possibly interactive decision landscape is Game Theory (GT). In this context, the monitoring setup proposed in this thesis corresponds to a class of GT models known as Inspection Games (IG). Such games were introduced in 1962, in their simplest formulation, by Dresher in the context of non-proliferation treaties and arms control. They model the general situation where one inspector verifies through inspections the correct behaviour of another party, called the inspectee. However, inspections are costly and the inspector's resources are limited. Hence, complete surveillance is not possible and an inspector will try to minimize the inspections. Finally, a game strategy combination (violating/inspecting or not) that is considered optimal by the parties represents a Nash equilibrium for the game. In this thesis, the initial IG model is enriched by the possibility of false negatives, i.e. the probability that a violation is not detected during an inspection. Both the initial and the enriched model remain abstract and can thus easily find interdisciplinary application. However, as the solution approach of this thesis in the context of distributed systems, it models the network participants' strategy choices. As an outcome, the IG model makes it possible to calculate system parameters that shift the Nash equilibrium to the desired target collaboration. The approach is designed as a framework. It can therefore be applied to any architecture, any selfish goal and any reliability technique. For the sake of concreteness, we discuss the IG approach by means of the illustrative case of a Publish/Subscribe (pub/sub) architecture. In this way, messages over the communication infrastructure have a specific associated semantics. The Inspection Game approach of this thesis secures the whole collaborative protocol in order to attain a correctly working system up to a specific degree (in the sense of collaboration). This represents a completely new approach to reliability mechanisms. Hence, this thesis can be considered fundamental research.
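The following hypothetical sketch illustrates the kind of calculation the Inspection Game view above enables: a 2x2 game between an inspectee (violate/comply) and an inspector (inspect/skip) with a false-negative probability, solved for its mixed-strategy Nash equilibrium via the usual indifference conditions. All payoff numbers are invented and do not come from the thesis.

```python
def mixed_equilibrium(A, B):
    """Mixed Nash equilibrium of a 2x2 bimatrix game (A: row player,
    B: column player) derived from the indifference conditions.
    Assumes an interior (fully mixed) equilibrium exists."""
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q   # p: prob. row plays row 0, q: prob. column plays column 0

# Inspectee rows: 0 = violate, 1 = comply. Inspector columns: 0 = inspect, 1 = skip.
g, penalty, f = 1.0, 3.0, 0.1   # gain, penalty, false-negative probability
c, d = 0.2, 1.0                 # inspection cost, damage of an undetected violation

A = [[f * g - (1 - f) * penalty, g],   # inspectee payoffs
     [0.0,                       0.0]]
B = [[-c - f * d, -d],                 # inspector payoffs
     [-c,          0.0]]

p_violate, q_inspect = mixed_equilibrium(A, B)
print(f"violation probability: {p_violate:.3f}, inspection probability: {q_inspect:.3f}")
```

Changing parameters such as the penalty or the inspection cost moves this equilibrium, which is the kind of lever the framework uses to steer a system towards a target collaboration level.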
In order to enable a broad application, the generality of this approach is supported by further contributions. These include, among others, the software library RCourse for practical robustness evaluations of overlay networks and a simulation environment for further research on the abstract IG model. All developments will finally be published as open source software. 2013 urn:nbn:de:bvb:739-opus-27118 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-132 Dissertation Rehn-Sonigo, Veronika Multi-criteria Mapping and Scheduling of Workflow Applications onto Heterogeneous Platforms The results summarized in this thesis deal with the mapping and scheduling of workflow applications on heterogeneous platforms. In this context, we focus on three different types of streaming applications: * Replica placement in tree networks * In this kind of application, clients issue requests to some servers and the question is where to place replicas in the network such that all requests can be processed. We discuss and compare several policies to place replicas in tree networks, subject to server capacity, Quality of Service (QoS) and bandwidth constraints. The client requests are known beforehand, while the number and location of the servers have to be determined. The standard approach in the literature is to enforce that all requests of a client be served by the closest server in the tree. We introduce and study two new policies. One major contribution of this work is to assess the impact of these new policies on the total replication cost. Another important goal is to assess the impact of server heterogeneity, both from a theoretical and a practical perspective. We establish several new complexity results, and provide several efficient polynomial heuristics for NP-complete instances of the problem. * Pipeline workflow applications * We consider workflow applications that can be expressed as linear pipeline graphs. An example of this application type is digital image processing, where images are treated in steady-state mode. Several antagonistic criteria should be optimized, such as throughput and latency (or a combination) as well as latency and reliability (i.e., the probability that the computation will be successful) of the application. While simple polynomial algorithms can be found for fully homogeneous platforms, the problem becomes NP-hard when tackling heterogeneous platforms. We present an integer linear programming formulation for this latter problem. Furthermore, we provide several efficient polynomial bi-criteria heuristics, whose relative performances are evaluated through extensive simulation. As a case study, we provide simulations and MPI experimental results for the JPEG encoder application pipeline on a cluster of workstations. * Complex streaming applications * We consider the execution of applications structured as trees of operators, i.e., the application of one or several trees of operators in steady-state to multiple data objects that are continuously updated at various locations in a network. A first goal is to provide the user with a set of processors that should be bought or rented in order to ensure that the application achieves a minimum steady-state throughput, and with the objective of minimizing platform cost. We then extend our model to multiple applications: several concurrent applications are executed at the same time in a network, and one has to ensure that all applications can reach their application throughput.
Another contribution of this work is to provide complexity results for different instances of the basic problem, as well as integer linear program formulations of various problem instances. The third contribution is the design of several polynomial-time heuristics, for both application models. One of the primary objectives of the heuristics for concurrent applications is to reuse intermediate results shared by multiple applications. 2009 urn:nbn:de:bvb:739-opus-22249 Sonstiger Autor der Fakultät für Informatik und Mathematik OPUS4-740 Dissertation Awwad, Tarek Context-Aware Worker Selection For Efficient Quality Control In Crowdsourcing In the last decade, crowdsourcing has proved its ability to address large scale data collection tasks, such as labeling large data sets, at a low cost and in a short time. However, the performance and behavior variability between workers as well as the variability in task designs and contents induce an unevenness in the quality of the produced contributions and, thus, in the final output quality. In order to maintain the effectiveness of crowdsourcing, it is crucial to control the quality of the contributions. Furthermore, maintaining the efficiency of crowdsourcing requires the time and cost overhead related to quality control to be at its lowest. While effective, current quality control techniques such as contribution aggregation, worker selection, context-specific reputation systems, and multi-step workflows suffer from fairly high time and budget overheads and from their dependency on prior knowledge about individual workers. In this thesis, we address this challenge by leveraging the similarity between completed and incoming tasks as well as the correlation between the workers' declarative profiles and their performance in previous tasks in order to perform an efficient task-aware worker selection. To this end, we propose the CAWS (Context-Aware Worker Selection) method, which operates in two phases: in an offline phase, completed tasks are clustered into homogeneous groups, for each of which the correlation with the workers' declarative profiles is learned. Then, in the online phase, incoming tasks are matched to one of the existing clusters and the corresponding, previously inferred profile model is used to select the most reliable online workers for the given task. Using declarative profiles helps eliminate any probing process, which reduces the time and the budget while maintaining the crowdsourcing quality. Furthermore, the set of completed tasks, when compared to a probing task split, provides a larger corpus from which a more precise profile model can be learned. This translates to a better selection quality, especially for harder tasks. In order to evaluate CAWS, we introduce CrowdED (Crowdsourcing Evaluation Dataset), a rich dataset to evaluate quality control methods and quality-driven task vectorization and clustering. The generation of CrowdED relies on a constrained sampling approach that allows producing a task corpus which respects both the budget and the type constraints. Besides helping in evaluating CAWS, and through its generality and richness, CrowdED helps in plugging the benchmarking gap present in the crowdsourcing quality control community. Using CrowdED, we evaluate the performance of CAWS in terms of the quality of the worker selection and in terms of the achieved time and budget reduction. Results show the following: first, automatic grouping is able to achieve a learning quality similar to job-based grouping.
Second, CAWS is able to outperform the state-of-the-art profile-based worker selection when it comes to quality. This is especially true when strong budget and time constraints are present on the requester side. Finally, we complement our work with a software contribution consisting of an open source framework called CREX (CReate Enrich eXtend). CREX allows the creation, the extension and the enrichment of crowdsourcing datasets. It provides the tools to vectorize, cluster and sample a task corpus to produce constrained task sets and to automatically generate custom crowdsourcing campaign sites. 2018 106 Seiten urn:nbn:de:bvb:739-opus4-7409 Fakultät für Informatik und Mathematik OPUS4-655 Konferenzveröffentlichung Parra Rodriguez, Juan D.; Posegga, Joachim RAPID: Resource and API-Based Detection Against In-Browser Miners Direct access to the system's resources such as the GPU, persistent storage and networking has enabled in-browser crypto-mining. Thus, there has been a massive response by rogue actors who abuse browsers for mining without the user's consent. This trend has grown steadily over the last months until this practice, i.e., CryptoJacking, has been acknowledged as the number one security threat by several antivirus companies. Considering this, and the fact that these attacks do not behave as JavaScript malware or other Web attacks, we propose and evaluate several approaches to detect in-browser mining. To this end, we collect information from the top 330,500 Alexa sites. Mainly, we used real-life browsers to visit sites while monitoring resource-related API calls and the browser's resource consumption, e.g., CPU. Our detection mechanisms are based on dynamic monitoring, so they are resistant to JavaScript obfuscation. Furthermore, our detection techniques can generalize well and classify previously unseen samples with up to 99.99% precision and recall for the benign class and up to 96% precision and recall for the mining class. These results demonstrate the applicability of detection mechanisms as a server-side approach, e.g., to support the enhancement of existing blacklists. Last but not least, we evaluated the feasibility of deploying prototypical implementations of some detection mechanisms directly on the browser. Specifically, we measured the impact of in-browser API monitoring on page-loading time and performed micro-benchmarks for the execution of some classifiers directly within the browser. In this regard, we ascertain that, even though there are engineering challenges to overcome, it is feasible and beneficial for users to bring the mining detection to the browser. New York, NY, USA ACM 2018 [14] Seiten Proceedings of the 34th Annual Computer Security Applications Conference 978-1-4503-6569-7 urn:nbn:de:bvb:739-opus4-6550 Fakultät für Informatik und Mathematik OPUS4-461 Dissertation Joblin, Mitchell Structural and Evolutionary Analysis of Developer Networks Large-scale software engineering projects are often distributed among a number of sites that are geographically separated by a substantial distance. In globally distributed software projects, time zone issues, language and cultural barriers, and a lack of familiarity among members of different sites all introduce coordination complexity and present significant obstacles to achieving a coordinated effort. For large-scale software engineering projects to satisfy their scheduling and quality goals, many developers must be capable of completing work items in parallel.
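As a toy illustration of the resource- and API-based detection idea in the RAPID abstract above, the sketch below trains a small off-the-shelf classifier on two invented features per site visit (average CPU load and resource-related API calls per minute). The data, the feature choice and the classifier are hypothetical stand-ins and far simpler than the paper's setup.

```python
from sklearn.linear_model import LogisticRegression

# Invented training data: [avg_cpu_percent, resource_api_calls_per_minute]
X = [[12,   3], [18,   5], [25,  10], [9,    1],   # benign visits
     [85, 240], [92, 310], [78, 190], [95, 400]]   # mining visits
y = [0, 0, 0, 0, 1, 1, 1, 1]                       # 1 = in-browser miner observed

clf = LogisticRegression().fit(X, y)

# Classify two unseen visits based on their dynamic behaviour only.
for visit in ([15, 4], [88, 275]):
    label = "mining" if clf.predict([visit])[0] == 1 else "benign"
    print(visit, "->", label)
```

Because the features describe observed behaviour rather than source code, obfuscating the mining script does not change them, which mirrors the obfuscation-resistance argument made above.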
A key factor in achieving this goal is to remove interdependencies among work items insofar as possible. By applying principles of modularity, work item interdependence can be reduced, but not removed entirely. As a result of uncertainty during the design and implementation phases and incomplete or misunderstood design intents, dependencies between work items inevitably arise and lead to requirements for developers to coordinate. The capacity of a project to satisfy coordination needs depends on how the work items are distributed among developers and how developers are organizationally arranged, among other factors. When coordination requirements fail to be recognized and appropriately managed, anecdotal evidence and prior empirical studies indicate that this condition results in decreased product quality and developer productivity. In essence, properties of the socio-technical environment, comprised of developers and the tasks they must complete, provide important insights concerning the project's capacity to meet product quality and scheduling goals. In this dissertation, we make contributions to support socio-technical analyses of software projects by developing approaches for abstracting and analyzing the technical and social activities of developers. More specifically, we propose a fine-grained, verifiable, and fully automated approach to obtain a proper view on developer coordination, based on commit information and source-code structure, mined from version-control systems. We apply methodology from network analysis and machine learning to identify developer communities automatically. To evaluate our approach, we analyze ten open-source projects with complex and active histories, written in various programming languages. By surveying 53 open-source developers from the ten projects, we validate the accuracy of the extracted developer network and the authenticity of the inferred community structure. Our results indicate that developers of open-source projects form statistically significant community structures and this particular network view largely coincides with developers' perceptions. Equipped with a valid network view on developer coordination, we extend our approach to analyze the evolutionary nature of developer coordination. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. We found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom. In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which highly central developers are hierarchically arranged and all other developers are not. Our results suggest that the organizational structure of large software projects is constrained to evolve towards a state that balances the costs and benefits of coordination, and the mechanisms used to achieve this state depend on the project's scale.
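A minimal, hypothetical sketch of the kind of network construction and community detection described in the Joblin abstract above: developers are linked when their commits touch the same file, and communities are found with a standard modularity heuristic from networkx. The commit data is invented, and the real approach is far more fine-grained (it uses source-code structure rather than plain file co-changes).

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Invented commit log: (developer, file touched by the commit)
commits = [
    ("alice", "parser.c"),  ("bob", "parser.c"),
    ("alice", "lexer.c"),   ("bob", "lexer.c"),
    ("carol", "ui.py"),     ("dave", "ui.py"),
    ("carol", "theme.css"), ("dave", "theme.css"),
    ("bob", "README.md"),   ("carol", "README.md"),
]

# Link two developers whenever they modified the same file.
G = nx.Graph()
touched_by = {}
for dev, path in commits:
    for other in touched_by.get(path, set()):
        if other != dev:
            G.add_edge(dev, other)
    touched_by.setdefault(path, set()).add(dev)

# Typically yields two communities here: {alice, bob} and {carol, dave}.
for community in greedy_modularity_communities(G):
    print(sorted(community))
```

Surveys like the ones described above are then used to check whether such automatically inferred communities match how developers themselves perceive their collaboration.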
As a final contribution, we use developer networks to establish a richer understanding of the different roles that developers play in a project. Developers of open-source projects are often classified according to core and peripheral roles. Typically, count-based operationalizations, which rely on simple counts of individual developer activities (e.g., number of commits), are used for this purpose, but there is concern regarding their validity and ability to elicit meaningful insights. To shed light on this issue, we investigate whether count-based operationalizations of developer roles produce consistent results, and we validate them with respect to developers' perceptions by surveying 166 developers. We improve over the state of the art by proposing a relational perspective on developer roles, using our fine-grained developer networks, and by examining developer roles in terms of developers' positions and stability within the developer network. In a study of 10 substantial open-source projects, we found that the primary difference between the count-based and our proposed network-based core-peripheral operationalizations is that the network-based ones agree more with developer perception than count-based ones. Furthermore, we demonstrate that a relational perspective can reveal further meaningful insights, such as that core developers exhibit high positional stability, upper positions in the hierarchy, and high levels of coordination with other core developers, which confirms assumptions of previous work. Overall, our research demonstrates that data stored in software repositories, paired with appropriate analysis approaches, can elicit valuable, practical, and valid insights concerning socio-technical aspects of software development. 2017 192 S. urn:nbn:de:bvb:739-opus4-4616 Fakultät für Informatik und Mathematik OPUS4-36 Dissertation Wiesner, Christian Query Evaluation Techniques for Data Integration Systems In this work we present novel query evaluation techniques for data integration systems in different environments, ranging from a central data-warehouse approach via distributed virtual market places to peer-to-peer (P2P) systems. Based on a new distributed evaluation technique, the so-called HyperQueries, we present a reference architecture for distributed virtual market places. These HyperQueries enable us to dynamically construct query evaluation plans by referencing sub-plans in the Internet. Furthermore, the process of data integration is structured. Subsequently, we investigate P2P data integration systems without central instances. We introduce so-called Super-Peers which structure a P2P network. Using this Super-Peer based network we "unroll" queries. This allows us to execute even user-defined operators close to the data sources. Finally, we propose novel, efficient join algorithms for decision support queries in central data-warehouse systems. The proposed order-preserving hashjoins and generalized hashteams are based on early sorting and early partitioning of the inputs and can speed up query evaluation by up to orders of magnitude. 2004 urn:nbn:de:bvb:739-opus-406 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-656 Konferenzveröffentlichung Parra Rodriguez, Juan D.; Posegga, Joachim CSP & Co. Can Save Us from a Rogue Cross-Origin Storage Browser Network! But for How Long?
We introduce a new browser abuse scenario where an attacker uses local storage capabilities without the website visitor's knowledge to create a network of browsers for persistent storage and distribution of arbitrary data. We describe how security-aware users can use mechanisms such as the Content Security Policy (CSP), sandboxing, and third-party tracking protection, i.e., CSP & Company, to limit the network's effectiveness. From another point of view, we also show that the upcoming Suborigin standard can inadvertently thwart existing countermeasures if it is adopted. New York, NY, USA ACM 2017 3 Seiten Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy 978-1-4503-5632-9 urn:nbn:de:bvb:739-opus4-6561 10.1145/3176258.3176951 Fakultät für Informatik und Mathematik OPUS4-767 Dissertation Gerl, Armin Modelling of a Privacy Language and Efficient Policy-based De-identification The processing of personal information is omnipresent in our data-driven society, enabling personalized services, which are regulated by privacy policies. Although privacy policies are strictly defined by the General Data Protection Regulation (GDPR), no systematic mechanism is in place to enforce them. Especially if data is merged from several sources into a data-set with different privacy policies associated, the management of and compliance with all privacy requirements are challenging during the processing of the data-set. Privacy policies can hereby vary due to different policies for each source or the personalization of privacy policies by individual users. Thus, there is a risk of negligent or malicious processing of personal data in defiance of privacy policies. To tackle this challenge, a privacy-preserving framework is proposed. Within this framework, privacy policies are expressed in the proposed Layered Privacy Language (LPL), which allows legal privacy policies and privacy-preserving de-identification methods to be specified. The policies are enforced by a Policy-based De-identification (PD) process. The PD process enables efficient compliance with various privacy policies simultaneously while applying pseudonymization, personal privacy anonymization and privacy models for the de-identification of the data-set. Thus, the privacy requirements of each individual privacy policy are enforced, filling the gap between legal privacy policies and their technical enforcement. 2019 xviii, 257 Seiten urn:nbn:de:bvb:739-opus4-7674 Fakultät für Informatik und Mathematik OPUS4-793 Dissertation Ganser, Stefan Iterative Schedule Optimization for Parallelization in the Polyhedron Model In high-performance computing, one primary objective is to exploit the performance that the given target hardware can deliver to the fullest. Compilers that have the ability to automatically optimize programs for a specific target hardware can be highly useful in this context. Iterative (or search-based) compilation requires little or no prior knowledge and can adapt more easily to concrete programs and target hardware than static cost models and heuristics. Thereby, iterative compilation helps in situations in which static heuristics do not reflect the combination of input program and target hardware well. Moreover, iterative compilation may enable the derivation of more accurate cost models and heuristics for optimizing compilers.
In this context, the polyhedron model is of help as it provides not only a mathematical representation of programs but, more importantly, a uniform representation of complex sequences of program transformations by schedule functions. The latter facilitates the systematic exploration of the set of legal transformations of a given program. Early approaches to purely iterative schedule optimization in the polyhedron model do not limit their search to schedules that preserve program semantics and, thereby, suffer from the need to explore large numbers of illegal schedules. More recent research ensures the legality of program transformations but presumes a sequential rather than a parallel execution of the transformed program. Other approaches do not perform a purely iterative optimization. We propose an approach to iterative schedule optimization for parallelization and tiling in the polyhedron model. Our approach targets loop programs that profit from data locality optimization and coarse-grained loop parallelization. The schedule search space can be explored either randomly or by means of a genetic algorithm. To determine a schedule's profitability, we rely primarily on measuring the transformed code's execution time. While benchmarking is accurate, it increases the time and resource consumption of program optimization tremendously and can even make it impractical. We address this limitation by proposing to learn surrogate models from schedules generated and evaluated in previous runs of the iterative optimization and to replace benchmarking by performance prediction to the extent possible. Our evaluation on the PolyBench 4.1 benchmark set reveals that, in a given setting, iterative schedule optimization yields significantly higher speedups in the execution of the program to be optimized. Surrogate performance models learned from training data that was generated during previous iterative optimizations can reduce the benchmarking effort without strongly impairing the optimization result. A prerequisite for this approach is a sufficient similarity between the training programs and the program to be optimized. 2019 xvii, 176 Seiten urn:nbn:de:bvb:739-opus4-7936 Fakultät für Informatik und Mathematik OPUS4-999 Dissertation Niedermeier, Florian Power-Adaptive Computing in Future Energy Networks The current electricity grid is undergoing major changes. There is increasing pressure to move away from power generation from fossil fuels, both due to ecological concerns and fear of dependencies on scarce natural resources. Increasing the share of decentralized generation from renewable sources is a widely accepted way to a more sustainable power infrastructure. However, this comes at the price of new challenges: generation from solar or wind power is not controllable and only forecastable with limited accuracy. To compensate for the increasing volatility in power generation, exerting control on the demand side is a promising approach. By providing flexibility on the demand side, imbalances between power generation and demand may be mitigated. This work is concerned with developing methods to provide grid support on the demand side while limiting the associated costs. This is done in four major steps: first, the target power curve to follow is derived, taking both the goals of a grid authority and the costs of the respective load into account. In the following, the special case of data centers as an instance of significant loads inside a power grid is focused on more closely.
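The following toy sketch mimics the search-plus-surrogate idea from the Ganser abstract above: candidate "schedules" are random transformation choices, a few of them are actually benchmarked, and a nearest-neighbour surrogate predicts the runtime of the rest so that only promising candidates are measured. Everything here (the feature encoding, the fake benchmark, the 1-NN model) is an invented stand-in, not the thesis' genetic algorithm or its real performance models.

```python
import random

random.seed(0)

# A candidate "schedule" is abstracted here as (tile_size, parallel_outer, interchange).
def random_schedule():
    return (random.choice([16, 32, 64, 128]),
            random.choice([0, 1]),
            random.choice([0, 1]))

def benchmark(s):
    """Stand-in for running the transformed program; returns a fake runtime."""
    tile, par, inter = s
    return (abs(tile - 64) / 64) + (0.0 if par else 1.0) + (0.2 if inter else 0.0)

def distance(a, b):
    # Crude mixed-scale distance; good enough for a toy surrogate.
    return sum(abs(x - y) for x, y in zip(a, b))

def predict(s, measured):
    """1-nearest-neighbour surrogate over already benchmarked schedules."""
    nearest = min(measured, key=lambda m: distance(s, m))
    return measured[nearest]

# Phase 1: benchmark a small random sample (the expensive part in reality).
measured = {s: benchmark(s) for s in {random_schedule() for _ in range(5)}}

# Phase 2: screen many candidates with the surrogate, benchmark only the best few.
candidates = [random_schedule() for _ in range(50)]
candidates.sort(key=lambda s: predict(s, measured))
for s in candidates[:3]:
    measured[s] = benchmark(s)

best = min(measured, key=measured.get)
print("best schedule:", best, "runtime:", round(measured[best], 3))
```

The trade-off sketched here is exactly the one evaluated above: prediction is cheap but only as good as the data the surrogate was trained on.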
Data center services are adapted so as to follow the previously derived power curve. By means of hardware power demand models, the required adaptation of hardware utilization can be derived. The possibilities of adapting software services are investigated for the special use case of live video encoding. A method to minimize quality-of-experience loss while reducing power demand is presented. Finally, the possibility of applying probabilistic model checking to a continuous demand-response scenario is demonstrated. 2020 xvi, 204 Seiten urn:nbn:de:bvb:739-opus4-9993 Fakultät für Informatik und Mathematik OPUS4-822 Dissertation Keren, Gil Neural Network Supervision: Notes on Loss Functions, Labels and Confidence Estimation We consider a number of enhancements to the standard neural network training paradigm. First, we show that carefully designed parameter update rules may replace the need for a loss function and its gradient. We introduce a parameter update rule that generalises the standard cross-entropy gradient and allows directly controlling the relative effect of easy and hard examples on the training process. We show that the proposed update rule cannot be derived from a loss function and yields better classification accuracy compared to training with the standard cross-entropy loss. In addition, we study the effect of the loss function choice on the learnt representations. We introduce the Single Logit Classification (SLC) task: classifying whether a given class is the correct class for a given example, in a computationally efficient manner, based on the appropriate class logit alone. A natural principle is proposed, the Principle of Logit Separation (PoLS), as a guideline for choosing and designing loss functions suitable for the SLC task. We mathematically analyse the alignment of eleven existing and novel loss functions with this principle. Experimental results show that using loss functions aligned with this principle yields a representation in the logits layer in which each logit is more informative of its class correctness, leading to considerably better SLC accuracy. Further, we attempt to alleviate the dependency of standard neural network models on large amounts of quality labels. The task of weakly supervised one-shot detection is considered, in which at training time the model is trained without any localisation labels, and at test time it needs to identify and localise instances of unseen classes. We propose the attention similarity networks (ASN) for this task. ASN use a Siamese neural network to compute a similarity score between an exemplar and different locations in a target example. Then, an attention mechanism performs localisation by learning to attend to the correct locations. The ASN model outperforms the relevant baselines for weakly supervised one-shot detection tasks in the audio and computer vision domains. Finally, we consider the problem of quantifying prediction confidence in the regression setting. We propose two novel algorithms for emitting calibrated prediction intervals for neural network regressors at any given confidence level. The two algorithms require binning of the output space and training the neural network regressor as a classifier. Then, the calibration algorithms choose the intervals in the output space, making sure they contain the amount of posterior probability mass that results in the desired confidence level.
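Aside on the confidence-estimation part of the preceding entry: a toy sketch of one plausible reading of the binning idea (discretise the output space into bins, let a classifier assign probability mass to the bins, then grow an interval around the most probable bin until it covers the requested mass). This is an illustration under our own simplifying assumptions, not the thesis's calibration algorithms.

```python
# Toy sketch: build a prediction interval from per-bin probabilities.
import numpy as np

def prediction_interval(bin_probs, bin_edges, confidence=0.9):
    """bin_probs: (K,) probabilities over bins; bin_edges: (K+1,) boundaries."""
    k = int(np.argmax(bin_probs))          # start at the most probable bin
    lo, hi = k, k
    mass = bin_probs[k]
    while mass < confidence and (lo > 0 or hi < len(bin_probs) - 1):
        # Greedily add the more probable neighbouring bin.
        left = bin_probs[lo - 1] if lo > 0 else -1.0
        right = bin_probs[hi + 1] if hi < len(bin_probs) - 1 else -1.0
        if left >= right:
            lo -= 1; mass += bin_probs[lo]
        else:
            hi += 1; mass += bin_probs[hi]
    return bin_edges[lo], bin_edges[hi + 1]

# Example: 10 bins on [0, 1] with a peaked bin distribution (made-up numbers).
edges = np.linspace(0.0, 1.0, 11)
probs = np.array([0.01, 0.02, 0.05, 0.10, 0.30, 0.25, 0.15, 0.07, 0.03, 0.02])
print(prediction_interval(probs, edges, confidence=0.9))   # (0.2, 0.8)
```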
2020 ix, 98 Seiten urn:nbn:de:bvb:739-opus4-8223 Fakultät für Informatik und Mathematik OPUS4-226 Dissertation Auer, Christopher Planar Graphs and their Duals on Cylinder Surfaces In this thesis, we investigate plane drawings of undirected and directed graphs on cylinder surfaces. In the case of undirected graphs, the vertices are positioned on a line that is parallel to the cylinder's axis, and the edge curves must not intersect this line. We show that a plane drawing is possible if and only if the graph is a double-ended queue (deque) graph, i.e., the vertices of the graph can be processed according to a linear order and the edges correspond to items in the deque inserted and removed at their end vertices. A surprising consequence of these observations is that the deque characterizes planar graphs with a Hamiltonian path. This result extends the known characterization of planar graphs with a Hamiltonian cycle by two stacks. From these insights, we also obtain a new characterization of queue graphs and their duals. We also consider the complexity of deciding whether a graph is a deque graph and prove that this problem is NP-complete. By introducing a split operation, we obtain the splittable deque and show that it characterizes planarity. For the proof, we devise an algorithm that uses the splittable deque to test whether a rotation system is planar. In the case of directed graphs, we study upward plane drawings where the edge curves follow the direction of the cylinder's axis (standing upward planarity; SUP) or wind around the axis (rolling upward planarity; RUP). We characterize RUP graphs by means of their duals and show that RUP and SUP swap their roles when considering a graph and its dual. There is a physical interpretation underlying this characterization: a SUP graph is to its RUP dual graph as electric current passing through a conductor is to the magnetic field surrounding the conductor. Whereas testing whether a graph is RUP is NP-hard in general [Bra14], for directed graphs without sources and sinks we develop a linear-time recognition algorithm that is based on our dual-graph characterization of RUP graphs. 2014 urn:nbn:de:bvb:739-opus-27430 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-370 Konferenzveröffentlichung Zwicklbauer, Stefan; Seifert, Christin; Granitzer, Michael Robust and Collective Entity Disambiguation through Semantic Embeddings Entity disambiguation is the task of mapping ambiguous terms in natural-language text to their corresponding entities in a knowledge base. It finds application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications such as Semantic Search, Reasoning and Question Answering. We propose a new collective, graph-based disambiguation algorithm utilizing semantic entity and document embeddings for robust entity disambiguation. Robust here refers to the property of achieving better than state-of-the-art results over a wide range of very different data sets. Our approach is also able to abstain if no appropriate entity can be found for a specific surface form. Our evaluation shows that our approach achieves significantly (>5%) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without data-set-specific tuning.
Moreover, we discuss the influence of the quality of the knowledge base on the disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms. 2016 urn:nbn:de:bvb:739-opus4-3704 Fakultät für Informatik und Mathematik OPUS4-367 unpublished Zwicklbauer, Stefan; Seifert, Christin; Granitzer, Michael DoSeR - A Knowledge-Base-Agnostic Framework for Entity Disambiguation Using Semantic Embeddings Entity disambiguation is the task of mapping ambiguous terms in natural-language text to their corresponding entities in a knowledge base. It finds application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications such as Semantic Search, Reasoning and Question Answering. In this work, we propose DoSeR (Disambiguation of Semantic Resources), a (named) entity disambiguation framework that is knowledge-base-agnostic with respect to both RDF knowledge bases (e.g. DBpedia) and entity-annotated document knowledge bases (e.g. Wikipedia). Initially, our framework automatically generates semantic entity embeddings given one or multiple knowledge bases. Subsequently, DoSeR accepts documents with a given set of surface forms as input and collectively links the surface forms to entities in a knowledge base with a graph-based approach (a toy sketch of such embedding-based linking is given below). We evaluate DoSeR on seven different data sets against publicly available, state-of-the-art (named) entity disambiguation frameworks. Our approach outperforms the state-of-the-art approaches that make use of RDF knowledge bases and/or entity-annotated document knowledge bases by up to 10% F1 measure. 2016 urn:nbn:de:bvb:739-opus4-3670 Fakultät für Informatik und Mathematik OPUS4-8 Dissertation Wichert, Carl-Alexander ULTRA - A Logic Transaction Programming Language A rule-based language for the specification of complex database updates and transactions, with a formal treatment of its syntax and declarative semantics. 2000 urn:nbn:de:bvb:739-opus-105 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-40 Dissertation Ellmenreich, Nils PolyAPM: Comparative Parallel Programming with Abstract Parallel Machines A parallelising compilation consists of many translation and optimisation stages. The programmer may steer the compiler through these stages by supplying directives with the source code or setting compiler switches. However, for an evaluation of the effects of individual stages, of their selection and of their best order, this approach is not optimal. To solve this problem, we propose the following method. The compilation is cast as a sequence of program transformations. Each intermediate program runs on an Abstract Parallel Machine (APM), while the program generated by the final transformation runs on the target architecture. Our intermediate programs are all in the same language, Haskell. Thus, each program is executable and still abstract enough to be legible, which enables the evaluation of the transformation that generated it. This evaluation is supported by a cost model, which makes a performance prediction of the abstract program for a real machine. Our project, PolyAPM, provides a directed acyclic graph -- usually a tree -- of APMs whose traversal specifies different combinations and orders of transformations. From one source program, several target programs can be constructed. Their run-time characteristics can be evaluated and compared.
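Aside on the two entity-disambiguation entries above (OPUS4-370, OPUS4-367): a toy sketch of embedding-based linking with abstention. The greedy scoring below is an illustrative simplification, not the actual collective, graph-based DoSeR algorithm; the entity names, embeddings and threshold are assumptions.

```python
# Toy sketch: link surface forms to candidate entities by combining
# similarity to the document embedding with coherence to entities linked
# so far; abstain when no candidate scores above a threshold.
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def link(doc_vec, candidates_per_form, abstain_below=0.2):
    """candidates_per_form: list of {entity_name: embedding vector} dicts."""
    results, linked_vecs = [], []
    for cands in candidates_per_form:
        best_name, best_score = None, float("-inf")
        for name, vec in cands.items():
            score = cos(vec, doc_vec)                # similarity to the document
            if linked_vecs:                          # coherence with linked entities
                score += float(np.mean([cos(vec, v) for v in linked_vecs]))
            if score > best_score:
                best_name, best_score = name, score
        if best_name is None or best_score < abstain_below:
            results.append(None)                     # abstain: no suitable entity
        else:
            results.append(best_name)
            linked_vecs.append(cands[best_name])
    return results

# Tiny demo with made-up embeddings.
rng = np.random.default_rng(1)
doc = rng.normal(size=8)
forms = [{"Entity_A": doc + 0.1 * rng.normal(size=8),   # close to the document
          "Entity_B": rng.normal(size=8)}]              # unrelated
print(link(doc, forms))   # most likely ['Entity_A']
```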
The goal of PolyAPM is not to support the one-off construction of parallel application programs. For the method's overhead to pay off, the project aims rather at supporting the construction and comparison of many similar variations of a parallel program and a comparative evaluation of parallelisation techniques. With the automation of transformations, PolyAPM can also be used to construct semi-automatic compilation systems. 2004 urn:nbn:de:bvb:739-opus-447 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-30 Dissertation Mian Syed, Ahmed Eine ökonomische statische Analysemethode zur Berechnung von Relational Attributes mittels regulärer Pfadbedingungen und ihre Anwendung auf Zeigeranalyse The goal of this work is to develop a new program analysis technique for pointer analysis. The technique is meant to be exact in the sense that it computes only results that can actually occur in real program runs. It is also meant to be economical, i.e., to invest only the minimal computational effort required for an exact solution. 2003 urn:nbn:de:bvb:739-opus-333 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-55 Dissertation Raitner, Marcus Efficient visual navigation of hierarchically structured graphs Visual navigation of hierarchically structured graphs is a technique for interactively exploring large graphs that possess an additional hierarchical structure. This structure is expressed in the form of a recursive clustering of the nodes: in call graphs of telephone networks, for instance, the nodes are identified with phone numbers; they are clustered recursively through the implicit structure of the numbers, e.g., nodes with the same area code belong to a cluster. In order to reduce the complexity and the size of the graph, only those subgraphs that are currently needed are shown in detail, while the others are collapsed, i.e., represented by meta nodes (a naive illustration of such graph views is sketched at the end of this listing). In such a graph view, the subgraphs in the areas of interest are expanded furthest, whereas those on the periphery are abstracted. As the areas of interest change over time, clusters in a view need to be expanded or contracted. First and foremost, there is a need for an efficient data structure for this graph view maintenance problem. Depending on the admissible modifications of the graph and its hierarchical clustering, three variants have been discussed in the literature: in the static case, everything is fixed; in the dynamic graph variant, only edges of the graph can be inserted and deleted; finally, in the dynamic graph and tree variant, the graph is additionally subject to node insertions and deletions and the clustering may change through splitting and merging of clusters. We introduce a new variant, dynamic leaves, which is based on the dynamic graph variant but additionally allows insertion and deletion of graph nodes, i.e., leaves of the hierarchy. So far, efficient data structures were known only for the static and the dynamic graph variant, i.e., neither the nodes of the graph nor the clustering could be modified. As this is unsatisfactory in an interactive editor for hierarchically structured graphs, we first generalize the approach of Buchsbaum et al. (Proc. 8th ESA, vol. 1879 of LNCS, pp. 120-131, 2000), in which graph view maintenance is formulated as a special case of range searching over tree cross products, to the new dynamic leaves variant.
This generalization builds on a novel technique of superimposing a search tree over an ordered list maintenance structure. At the cost of an additional factor of roughly O(log n / log log n), this is the first data structure for the problem of graph view maintenance in which the node set is dynamic. Appropriately visualizing the expansion and contraction of clusters is the second challenge. We propose a local update scheme for the algorithm of Sugiyama and Misue (IEEE Trans. on Systems, Man, and Cybernetics 21 (1991) 876-892) for drawing compound digraphs. The layered drawings that it produces have many applications ranging from biochemical pathways to UML diagrams. By modifying the intermediate results of every step of the original algorithm locally, the update scheme is more efficient than re-applying the entire algorithm after expansion or contraction. As our experimental results on randomly generated graphs show, the average time for updating the drawing is around 50% of the time for redrawing for dense graphs and below 20% for sparse graphs. Moreover, the performance gain is not at the expense of quality: the area of the drawing increases only insignificantly, and the number of crossings is even reduced. At the same time, the locality of the updates preserves the user's mental map of the graph: nodes that are not affected stay on the same level in the same relative order, and expanded edges take the same course as the corresponding contracted edge; furthermore, expansion and contraction are visually inverse. Finally, our new data structure and the update scheme are combined into an interactive editor and viewer for compound (di-)graphs. A flexible and extensible software architecture is introduced that lays the ground for future research. It employs the well-known Model-View-Controller (MVC) paradigm to separate the abstract data from its presentation. As a consequence, the purely combinatorial parts, i.e., the compound (di-)graph and its views, are reusable without the editor front-end. A proof-of-concept implementation based on the proposed architecture shows its feasibility and suitability. 2005 urn:nbn:de:bvb:739-opus-658 Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik OPUS4-198 Masterarbeit / Diplomarbeit Stadler, Bernhard Ein graphbasierter Formalismus zur Programmmanipulation This thesis develops a formalism for the implementation-, language- and representation-independent description of program manipulations such as merge, feature composition and interaction analysis. It is intended in particular to enable application-oriented and reusable definitions of program manipulations. Program manipulations are described in a representation-independent, declarative way by means of category theory. Based on an analysis of the text-to-model framework EMFText, graph variants from algebraic graph transformation theory are selected for program representation. These graph variants are decomposed into graph features according to their characteristic properties. Building on sketches, a graph-based, category-theoretic form of algebraic specification, a procedure for the formal definition of graph features is developed. These graph features can be combined flexibly into cross-language, graph-based program representations. As applications of the formalism, merge and feature composition are described.
2012 urn:nbn:de:bvb:739-opus-27128 10.15475/opus-198 Sonstiger Autor der Fakultät für Informatik und Mathematik
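Closing aside on the graph-view maintenance problem from the entry OPUS4-55 (Efficient visual navigation of hierarchically structured graphs) above: a naive, dictionary-based sketch of how collapsed clusters become meta nodes in a view. It is purely illustrative and unrelated to the efficient data structures developed in that thesis; the hierarchy, node names and edges are made up.

```python
# Naive sketch of a graph view over a recursively clustered graph:
# collapsed clusters are shown as meta nodes, expanded ones show their contents.
# children: the cluster hierarchy; the graph's edges connect leaves only.
children = {"root": ["A", "B"], "A": ["a1", "a2"], "B": ["b1", "b2"]}
parent = {child: p for p, cs in children.items() for child in cs}
edges = [("a1", "b1"), ("a2", "b2"), ("a1", "a2")]

def representative(leaf, expanded):
    """The outermost collapsed ancestor of `leaf`, or the leaf itself if all
    its ancestors below the root are expanded."""
    rep, node = leaf, leaf
    while node in parent:
        node = parent[node]
        if node != "root" and node not in expanded:
            rep = node          # a collapsed cluster hides its contents
    return rep

def view(expanded):
    """Edges of the quotient graph induced by the currently expanded clusters."""
    visible = set()
    for u, v in edges:
        ru, rv = representative(u, expanded), representative(v, expanded)
        if ru != rv:
            visible.add(tuple(sorted((ru, rv))))
    return sorted(visible)

print(view(expanded={"A"}))    # [('B', 'a1'), ('B', 'a2'), ('a1', 'a2')]
print(view(expanded=set()))    # [('A', 'B')]
```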