TY - THES A1 - Gerl, Armin T1 - Modelling of a Privacy Language and Efficient Policy-based De-identification N2 - The processing of personal information is omnipresent in our data-driven society enabling personalized services, which are regulated by privacy policies. Although privacy policies are strictly defined by the General Data Protection Regulation (GDPR), no systematic mechanism is in place to enforce them. Especially if data from several sources, each associated with a different privacy policy, is merged into one data-set, managing and complying with all privacy requirements during the processing of the data-set is challenging. Privacy policies may vary because each source has its own policy or because individual users personalize their privacy policies. Thus, there is a risk of negligent or malicious processing of personal data in defiance of privacy policies. To tackle this challenge, a privacy-preserving framework is proposed. Within this framework, privacy policies are expressed in the proposed Layered Privacy Language (LPL), which makes it possible to specify legal privacy policies and privacy-preserving de-identification methods. The policies are enforced by a Policy-based De-identification (PD) process. The PD process enables efficient compliance with various privacy policies simultaneously while applying pseudonymization, personal privacy anonymization and privacy models for de-identification of the data-set. Thus, the privacy requirements of each individual privacy policy are enforced, filling the gap between legal privacy policies and their technical enforcement. KW - Privacy Language KW - Personal Privacy KW - Privacy-Preservation KW - GDPR KW - LPL KW - Datenschutz KW - Anonymisierung KW - Pseudonymisierung KW - Formale Sprache KW - Datenschutzgesetz Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7674 ER - TY - THES A1 - Jurgovsky, Johannes T1 - Context-Aware Credit Card Fraud Detection N2 - Credit card fraud has emerged as a major problem in the electronic payment sector. In this thesis, we study data-driven fraud detection and address several of its intricate challenges by means of machine learning methods, with the goal of identifying fraudulent transactions that have been issued illegitimately on behalf of the rightful card owner. In particular, we explore several means to leverage contextual information beyond a transaction’s basic attributes on the transaction level, sequence level and user level. On the transaction level, we aim to identify fraudulent transactions which, in terms of their attribute values, are globally distinguishable from genuine transactions. We provide an empirical study of the influence of class imbalance and forecasting horizons on the classification performance of a random forest classifier. We augment transactions with additional features extracted from external knowledge sources and show that external information about countries and calendar events improves classification performance most noticeably on card-not-present transactions. On the sequence level, we aim to detect frauds that are inconspicuous in the background of all transactions but peculiar with respect to the short-term sequence they appear in. We use a Long Short-term Memory network (LSTM) for modeling the sequential succession of transactions. Our results suggest that LSTM-based modeling is a promising strategy for characterizing sequences of card-present transactions, but it is not adequate for card-not-present transactions.
On the user level, we elaborate on feature aggregations and propose a flexible concept that allows us to define numerous features by means of a simple syntax. We provide a CUDA-based implementation for the computationally expensive extraction, achieving a speed-up of two orders of magnitude over a single-core implementation. Our feature selection study reveals that aggregates extracted from users’ transaction sequences are more useful than those extracted from merchant sequences. Moreover, we discover multiple sets of candidate features whose performance is equivalent to that of manually engineered aggregates while being structurally different. Regarding future work, we motivate the usage of simple and transparent machine learning methods for credit card fraud detection and sketch a simple user-focused modeling approach. KW - Credit Card Fraud Detection KW - Machine Learning KW - Data Augmentation KW - Feature Engineering KW - Kreditkartenmissbrauch KW - Computersicherheit Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7622 ER - TY - THES A1 - Klaus, Tina T1 - Complexity Analysis of Quantizations of Multidimensional Stochastic Differential Equations N2 - This dissertation is concerned with the quantization of certain stochastic processes, namely a solution X of a multidimensional stochastic differential equation (SDE). The quantization problem for X consists in approximating X by a random element which takes only finitely many values. Our main interest lies in the investigation of the asymptotic behavior of the Nth minimal quantization error of X as N tends to infinity, which incorporates the determination of both the sharp rate of convergence and explicit asymptotic constants. In particular, explicit asymptotic constants have so far been unknown in the context of multidimensional SDEs. Furthermore, as part of our analysis, we provide a method which yields a strongly asymptotically optimal sequence of N-quantizations of X. In certain special cases our method is fully constructive and the algorithm is easy to implement. KW - Stochastische Differentialgleichung KW - Komplexität KW - Quantifizierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7665 ER - TY - THES A1 - Charpenay, Victor T1 - Semantics for the Web of Things: Modeling the Physical World as a Collection of Things and Reasoning with their Descriptions N2 - The main research question of this thesis is how to develop a theory that provides foundations for the development of Web of Things (WoT) systems. A theory for WoT shall provide a model of the ‘things’ WoT agents relate to, such that these relations determine what interactions take place between these agents. This thesis presents a knowledge-based approach in which the semantics of WoT systems is given by a transformation (a homomorphism) between a graph representing agent interactions and a knowledge graph describing ‘things’. It focuses on three aspects of knowledge graphs in particular: the vocabulary with which assertions can be made, the rules that can be defined over this vocabulary and its serialization to efficiently exchange pieces of a knowledge graph. Each aspect is developed in a dedicated chapter, with specific contributions to the state of the art. The need for a unified vocabulary to describe ‘things’ in WoT and the Internet of Things (IoT) has been identified early on in the literature. Many proposals have consequently been published, in the form of Web ontologies. In Ch.
2, a systematic review of these proposals is developed, together with a comparison with the data models of the principal IoT frameworks and protocols. The contribution of the thesis in that respect is an alignment between the Thing Description (TD) model and the Semantic Sensor Network (SSN) ontology, two standards of the World Wide Web Consortium (W3C). The scope of this thesis is generally limited to Web standards, especially those defined around the Resource Description Framework (RDF). Web ontologies expose not only a vocabulary but also rules to extend a knowledge graph by means of reasoning. Starting from a set of TD documents, new relations between ‘things’ can be “discovered” this way, indicating possible interactions between the servients that relate to them. The experiments presented in Ch. 3 were conducted on the basis of this semantic discovery framework on two use cases: a building automation use case provided by Intel Labs and an industrial control use case developed internally at Siemens. The relations to discover often involve anonymous nodes in the knowledge graph: the chapter also introduces a novel skolemization algorithm to correctly process these nodes on a well-defined fragment of the Web Ontology Language (OWL). Finally, because this semantic discovery framework relies on the exchange of TD documents, Ch. 4 introduces a binary format for RDF that proves efficient in serializing TD assertions such that even the smallest WoT agents, i.e. micro-controllers, can store and process them. A formalization for the semantics-preserving compaction and querying of TD documents is also introduced in this chapter, which forms the basis of an embedded RDF store called the µRDF store. The ability of all WoT agents to query logical assertions about themselves and their environment, as found in TD documents, is a first step towards knowledge-based intelligent systems that can operate autonomously and dynamically in a decentralized way. The µRDF store is an attempt to illustrate the practical outcomes of the theory of WoT developed throughout this thesis. N2 - This dissertation develops a theoretical foundation for the specification of Web of Things (WoT) systems. The specification of WoT systems is based on a model of the things with which WoT agents establish relations, which in turn enable interactions between agents. The dissertation presents a knowledge-based approach in which the semantics of WoT systems is defined as a transformation from a graph of agent interactions to a knowledge graph. The work covers exactly three aspects of knowledge graphs: the vocabulary in which logical assertions are formulated, the rules that can be defined over such a vocabulary, and the serialization that enables the efficient exchange of parts of a knowledge graph. Each of the three aspects is addressed, together with its scientific contribution, in a dedicated chapter. The need for a unified vocabulary to describe things in the WoT and the Internet of Things (IoT) was identified early on in the literature. Many approaches have been published in this regard, above all as Web ontologies. In Chapter 2, these approaches are compared with one another, as well as with the data models of the dominant IoT frameworks and protocols.
The contribution of the dissertation in this regard is the merging of the WoT Thing Description (TD) model and the Semantic Sensor Network (SSN) ontology, two standards published by the World Wide Web Consortium (W3C), into a single ontology. The scope of this dissertation is limited to Web standards, in particular the standards contained in the Resource Description Framework (RDF). Web ontologies consist not only of a vocabulary but also of rules for extending a knowledge graph through inference. From a set of TD documents, new relations between things can be derived, thereby introducing new interactions between the agents that relate to these things. The experiments described in Chapter 3 apply this semantic framework in two domains: building automation and industrial control systems. In certain cases, detecting implicit relations between things depends on so-called anonymous nodes in the graph: the chapter introduces a novel skolemization algorithm to process these nodes correctly for a well-defined fragment of the Web Ontology Language (OWL). Finally, since the application of this semantic framework requires the exchange of TD documents, Chapter 4 introduces a binary format for RDF that proves very efficient for serialization, so that even small WoT agents, namely microcontrollers, can store and process TD documents. A formal definition for the compaction and querying of TD documents is introduced in this chapter. The chapter also describes the implementation of an embedded RDF database called the µRDF store. The ability of WoT agents to draw logical conclusions about themselves and their environment is the first step towards knowledge-based intelligent systems that can act autonomously, dynamically and in a decentralized way. The µRDF store demonstrates the practical benefits of the theory for WoT developed in this dissertation. KW - Semantic Web KW - Web of Things KW - Internet of Things KW - Thing Description KW - Web Ontologies KW - Semantic Web KW - Internet der Dinge Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7578 ER - TY - THES A1 - Wahl, Florian T1 - Methods for monitoring the human circadian rhythm in free-living N2 - Our internal clock, the circadian clock, determines at which time we have our best cognitive abilities, are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and has serious health impacts, especially when sustained over a long period of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken in hourly intervals while the subject stays in dim light conditions from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise.
The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with the smart eyeglasses with a ROC AUC of 0.9 for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to external clocks, thus living a healthier lifestyle. KW - context recognition KW - human circadian rhythm KW - machine learning KW - sleep timing KW - smart eyeglasses KW - Tagesrhythmus Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7607 ER - TY - THES A1 - Kronawitter, Stefan T1 - Automatic Performance Optimization of Stencil Codes N2 - Stencil codes are a widely used class of codes. Their general structure is very simple: data points in a large grid are repeatedly recomputed from neighboring values. This predefined neighborhood is the so-called stencil. Despite their very simple structure, stencil codes are hard to optimize since only few computations are performed while a comparatively large number of values have to be accessed, i.e., stencil codes usually have a very low computational intensity. Moreover, the set of optimizations and their parameters also depend on the hardware on which the code is executed. In short, current production compilers are not able to fully optimize this class of codes and optimizing each application by hand is not practical. As a remedy, we propose a set of optimizations and describe how they can be applied automatically by a code generator for the domain of stencil codes. A combination of space and time tiling is able to increase the data locality, which significantly reduces the memory-bandwidth requirements: a standard three-dimensional 7-point Jacobi stencil can be accelerated by a factor of 3. This optimization can target basically any stencil code, while others are more specialized.
For example, support for arbitrary linear data layout transformations is especially beneficial for colored kernels, such as a Red-Black Gauss-Seidel smoother. On the one hand, an optimized data layout for such kernels reduces the bandwidth requirements; on the other hand, it simplifies an explicit vectorization. Other noteworthy optimizations described in detail are redundancy elimination techniques that eliminate common subexpressions both in a sequence of statements and across loop boundaries, arithmetic simplifications and normalizations, and the vectorization mentioned previously. In combination, these optimizations are able to increase the performance not only of the model problem given by Poisson’s equation, but also of real-world applications: an optical flow simulation and the simulation of a non-isothermal and non-Newtonian fluid flow. KW - Optimierung KW - Codegenerierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7618 ER - TY - THES A1 - Stadler, Thomas T1 - Eine Anwendung der Invariantentheorie auf das Korrespondenzproblem lokaler Bildmerkmale N2 - When, in the first half of the 19th century, more and more eminent mathematicians occupied themselves with the search for invariants, nobody could have foreseen that, with the beginning of the computer age, invariant theory would find an extremely fruitful field of application in image processing and computer vision. This thesis presents a new application of invariant theory in image processing. To this end, local image features are considered, namely the coordinates of a polynomial function with respect to a suitable orthonormal basis of P_n(R^2,R) that best approximates the time-integrated sensor input function on local pixel windows. These image features are used in many applications to recognize and localize objects in images. Examples include the detection of workpieces on an assembly line or the tracking of lane markings in driver assistance systems. The search for a pattern in a search image can be modeled as a pair of stereo images on which the affine-linear group AGL(R) acts locally. Thus, to determine whether two local pixel windows are approximately images of a particular three-dimensional surface patch, one has to decide whether the image patches can be approximately transformed into one another by an operation of the group AGL(R). Depending on the application, it already suffices to consider suitable subgroups G of AGL(R). Thanks to the local approximation by polynomial functions, the operation of a subgroup G induces an operation on the real vector space P_n(R^2,R). The correspondence problem can thus be reduced to the question of whether there is a transformation T in G such that p approximately equals the composition of q and T for the associated approximation polynomials p,q in P_n(R^2,R). In other words, it has to be decided whether p and q approximately lie in the same G-orbit, a typical question of invariant theory. Since only local image patches are considered, it further suffices to consider subgroups G of GL_2(R); the answer for the semidirect product of R^2 with G then follows immediately. Of particular interest for applications is the special orthogonal group G=SO_2(R), and with it the proper Euclidean group. For this group and special pixel windows, the correspondence problem has already been solved.
In this thesis, the problem is solved in exactly this setting as well, but in an elegant way using methods from invariant theory. The approach presented here, however, is not limited to this group and special pixel windows, but can easily be extended to further cases. To this end, it has to be clarified in particular how so-called fundamental invariants of local image features, that is, ultimately invariants of polynomial functions, can be computed, i.e., generating systems of the corresponding invariant rings. With their help, the membership of a polynomial function in the orbit of another function can be examined in a simple way. In addition to presenting the method for finding correspondences and the theory required for it, this thesis studies generating systems of invariant rings that possess particularly "nice" properties. These nice generating systems of subalgebras are called SAGBI bases ("Subalgebra Analogs to Gröbner Bases for Ideals"), in analogy to Gröbner bases as generating systems of ideals. SAGBI bases are treated here primarily from an algorithmic point of view, i.e., the focus is on the computation of SAGBI bases. To this end, several algorithms are developed, proven correct and implemented. The result is a software package on SAGBI bases for the computer algebra system ApCoCoA whose functionality in this scope is not to be found in any other computer algebra system. In the course of implementing the individual algorithms, the theory of SAGBI bases could moreover be extended in numerous places. KW - Computeralgebra KW - SAGBI-Basen KW - Invariantentheorie KW - Bildverarbeitung KW - (lokale) Bildmerkmale Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4026 ER - TY - THES A1 - de Ponte Müller, Fabian T1 - Cooperative Relative Positioning for Vehicular Environments N2 - Driver assistance systems are an essential building block for increasing road traffic safety. Safety-critical applications in particular require accurate information about the position and speed of the vehicles in the immediate vicinity in order to anticipate potentially dangerous situations, warn the driver or intervene autonomously. Representative examples of assistance systems that depend on accurate, continuous and reliable relative positioning of other road users are emergency brake assistants, lane change assistants and adaptive cruise control. Modern approaches use environment sensors such as radar, laser scanners or cameras to estimate the position of neighboring vehicles. Disadvantages common to these sensor systems are their limited detection range and the need for a direct, unobstructed line of sight to the neighboring vehicle. Cooperative solutions based on vehicle-to-vehicle communication can extend the own perception range by exchanging position information between road users. In this dissertation, the possibility of cooperative relative positioning of road vehicles by means of vehicle-to-vehicle communication is examined with respect to its accuracy, continuity and robustness. In a novel approach, GNSS raw data, such as pseudoranges and Doppler measurements, is exchanged instead of transmitting the position determined independently in each vehicle.
This has the advantage that correlated errors in both vehicles potentially cancel out. This is investigated mathematically, modeled in simulation and verified experimentally in this dissertation. To increase reliability and continuity even in "disturbed" environments, the GNSS raw data is fused with inertial sensor measurements from both vehicles in a Bayesian filter. Within the scope of this dissertation, the sensor fusion approach was validated in a traffic simulator as well as in a GNSS simulator. For the experimental evaluation, two test vehicles were equipped with the various sensors and measurement drives in diverse environments were performed. This work shows that, on highways, the relative position of another vehicle can be estimated continuously with an accuracy of less than one meter. High reliability in the longitudinal and lateral directions can be achieved, and the system exhibits an uncertainty of less than 2.5 m for 90% of the time. In rural environments, the uncertainty of the relative position grows. With the help of the on-board sensors, errors during drives through forests and villages can be correctly bridged. In urban environments, the limitations of the system become apparent. Owing to the difficult estimation of the ego vehicle's heading, it is above all the longitudinal component of the relative position that is strongly distorted in urban environments. N2 - Advanced driver assistance systems play an important role in increasing the safety on today's roads. The knowledge about the other vehicles' positions is a fundamental prerequisite for numerous safety critical applications, making it possible to foresee critical situations, warn the driver or autonomously intervene. Forward collision avoidance systems, lane change assistants or adaptive cruise control are examples of safety relevant applications that require an accurate, continuous and reliable relative position of surrounding vehicles. Currently, the positions of surrounding vehicles are estimated by measuring the distance with e.g. radar, laser scanners or camera systems. However, all these techniques have limitations in their perception range, as all of them can only detect objects in their line-of-sight. The limited perception range of today's vehicles can be extended in the future by using cooperative approaches based on Vehicle-to-Vehicle (V2V) communication. In this thesis, the capabilities of cooperative relative positioning for vehicles will be assessed in terms of its accuracy, continuity and reliability. A novel approach where Global Navigation Satellite System (GNSS) raw data is exchanged between the vehicles is presented. Vehicles use GNSS pseudorange and Doppler measurements from surrounding vehicles to estimate the relative positioning vector in a cooperative way. In this thesis, this approach is shown to outperform the absolute position subtraction as it is able to effectively cancel out errors common to both GNSS receivers. This is modeled theoretically and demonstrated empirically using simulated signals from a GNSS constellation simulator. In order to cope with GNSS outages and to have a sufficiently good relative position estimate even in strong multipath environments, a sensor fusion approach is proposed. In addition to the GNSS raw data, inertial measurements from speedometers, accelerometers and turn rate sensors from each vehicle are exchanged over V2V communication links.
A Bayesian approach is applied to consider the uncertainties inherent to each of the information sources. In a dynamic Bayesian network, the temporal relationship of the relative position estimate is predicted by using relative vehicle movement models. Real-world measurements in highway, rural and urban scenarios are also performed within the scope of this work to demonstrate the performance of the cooperative relative positioning approach based on sensor fusion. The results show that the relative position of another vehicle towards the ego vehicle can be estimated with sub-meter accuracy in highway scenarios. Here, good reliability and 90% availability with an uncertainty of less than 2.5 m are achieved. In rural environments, drives through forests and towns are correctly bridged with the support of on-board sensors. In an urban environment, the difficult estimation of the ego vehicle heading has a major impact on the relative position estimate, yielding large errors in its longitudinal component. KW - Fahrerassistenzsystem Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5411 ER - TY - THES A1 - Fischer, Andreas T1 - An Evaluation Methodology for Virtual Network Embedding N2 - The increasing scale and complexity of computer networks impose a need for highly flexible management mechanisms. The concept of network virtualization promises to provide this flexibility. Multiple arbitrary virtual networks can be constructed on top of a single substrate network. This allows network operators and service providers to tailor their network topologies to the specific needs of any offered service. However, the assignment of resources proves to be a problem. Each newly defined virtual network must be realized by assigning appropriate physical resources. For a given set of virtual networks, two questions arise: Can all virtual networks be accommodated in the given substrate network? And how should the respective resources be assigned? The underlying problem is commonly known as the Virtual Network Embedding problem. A multitude of algorithms has already been proposed, aiming to provide solutions to that problem under various constraints. For the evaluation of these algorithms, an empirical approach is typically adopted, using artificially created random problem instances. However, due to the complex effects of random problem generation, the obtained results can be hard to interpret correctly. A structured evaluation methodology that can avoid these effects is currently missing. This thesis aims to fill that gap. Based on a thorough understanding of the problem itself, the effects of random problem generation are highlighted. A new simulation architecture is defined, increasing the flexibility for experimentation with embedding algorithms. A novel way of generating embedding problems is presented which mitigates the effects of conventional problem generation approaches. An evaluation using these newly defined concepts demonstrates how new insights on algorithm behavior can be gained. The proposed concepts support experimenters in obtaining more precise and tangible evaluation data for embedding algorithms.
KW - Virtual Network Embedding KW - Empirical Evaluation KW - Network Virtualization KW - Experimental Algorithmics KW - Virtuelles Netz KW - Virtualisierung KW - Algorithmus KW - Virtuelles Netz KW - Kombinatorische Einbettung Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4793 ER - TY - THES A1 - Löwe, Stefan T1 - Effective Approaches to Abstraction Refinement for Automatic Software Verification N2 - This thesis presents various techniques that aim at enabling more effective and more efficient approaches for automatic software verification. After a brief motivation of why automatic software verification is becoming ever more relevant, we continue by detailing the formalism used in this thesis and the concepts it is built on. We then describe the design and implementation of the value analysis, an analysis for automatic software verification that tracks state information concretely. From a thorough evaluation based on well over 4 000 verification tasks from the latest edition of the International Competition on Software Verification (SV-COMP), we learn that this plain value analysis leads to an efficient verification process for many verification tasks, but at the same time fails to solve other verification tasks due to state-space explosion. From this insight we infer that some form of abstraction technique must be added to the value analysis in order to also allow the successful verification of large and complex verification tasks. As a solution, we propose to incorporate counterexample-guided abstraction refinement (CEGAR) and interpolation into the value domain. To this end, we design a novel interpolation procedure that extracts interpolants for the value domain from infeasible counterexamples, allowing us to form a precision strong enough to exclude these infeasible counterexamples and to make progress in the CEGAR loop. We then describe several optimizations and extensions to these concepts, such that the value analysis with CEGAR becomes competitive for automatic software verification. As the next step, we combine the CEGAR-based value analysis with a predicate analysis, to obtain a more precise and efficient composite analysis based on CEGAR. This composite analysis is indeed on a par with the world’s leading software verification tools, as witnessed by the results of SV-COMP’13, where this approach achieved the 2nd place in the overall ranking. With competitive CEGAR-based analyses available for the value domain, the predicate domain, and the combination thereof, we then turn our attention to techniques whose goal is to make all these CEGAR-based approaches more successful. Our first novel idea in this regard is based on the concept of infeasible sliced prefixes, which allow the computation of different precisions from a single infeasible counterexample. This adds choice to the CEGAR loop, whereas without this enhancement, no choice of a specific precision, i.e., a specific refinement, is possible. In our evaluation we show, for both the value analysis and the predicate analysis, that choosing different infeasible sliced prefixes during the refinement step leads to major differences in verification effectiveness and verification efficiency. Building on the concept of infeasible sliced prefixes, we define several heuristics in order to precisely select a single refinement from a set of possible refinements.
We make this new concept, which we refer to as guided refinement selection, available to both the value and the predicate analysis, and in a large-scale evaluation we try to answer the question of which selection technique leads to well-suited abstractions and, thus, to a more effective verification process. Additionally, we present the idea of inter-analysis refinement selection, where the refinement component of a composite analysis may decide which of its component analyses is best to be refined, and in yet another evaluation we highlight the positive effects of this technique. Finally, we present the results of SV-COMP’16, where the verifier we contributed, which is based on the concepts and ideas presented in this thesis, achieved the 1st place in the category DeviceDriversLinux64. KW - software verification, model checking, counterexample guided abstraction refinement, CEGAR, interpolation, sliced prefixes, refinement selection, value analysis, predicate analysis, CPAchecker, automatic, automated KW - Programmverifikation Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4815 ER - TY - THES A1 - Petit, Albin T1 - Introducing Privacy in Current Web Search Engines N2 - During the last few years, the technological progress in collecting, storing and processing a large quantity of data for a reasonable cost has raised serious privacy issues. Privacy concerns many areas, but is especially important in frequently used services like search engines (e.g., Google, Bing, Yahoo!). These services allow users to retrieve relevant content on the Internet by exploiting their personal data. In this context, developing solutions that enable users to use these services in a privacy-preserving way is becoming increasingly important. In this thesis, we introduce SimAttack, an attack against existing mechanisms for querying search engines in a privacy-preserving way. This attack aims at retrieving the original user query. We show with this attack that three representative state-of-the-art solutions do not protect user privacy in a satisfactory manner. We therefore develop PEAS, a new protection mechanism that better protects user privacy. This solution leverages two types of protection: hiding the user's identity (with a succession of two nodes) and masking the user's queries (by combining them with several fake queries). To generate realistic fake queries, PEAS exploits previous queries sent by the users in the system. Finally, we present mechanisms to identify sensitive queries. Our goal is to adapt existing protection mechanisms to protect sensitive queries only, and thus save user resources (e.g., CPU, RAM). We design two modules to identify sensitive queries. By deploying these modules on real protection mechanisms, we establish empirically that they dramatically improve the performance of the protection mechanisms. KW - Privacy KW - Search Engine KW - Unlinkability KW - Indistinguishability KW - Suchmaschine KW - Datensicherung KW - Computersicherheit KW - Privatsphäre Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4652 ER - TY - THES A1 - Joblin, Mitchell T1 - Structural and Evolutionary Analysis of Developer Networks N2 - Large-scale software engineering projects are often distributed among a number of sites that are geographically separated by a substantial distance.
In globally distributed software projects, time zone issues, language and cultural barriers, and a lack of familiarity among members of different sites all introduce coordination complexity and present significant obstacles to achieving a coordinated effort. For large-scale software engineering projects to satisfy their scheduling and quality goals, many developers must be capable of completing work items in parallel. A key factor in achieving this goal is to remove interdependencies among work items insofar as possible. By applying principles of modularity, work item interdependence can be reduced, but not removed entirely. As a result of uncertainty during the design and implementation phases and incomplete or misunderstood design intents, dependencies between work items inevitably arise and lead to requirements for developers to coordinate. The capacity of a project to satisfy coordination needs depends on how the work items are distributed among developers and how developers are organizationally arranged, among other factors. When coordination requirements fail to be recognized and appropriately managed, anecdotal evidence and prior empirical studies indicate that this condition results in decreased product quality and developer productivity. In essence, properties of the socio-technical environment, comprising developers and the tasks they must complete, provide important insights concerning the project's capacity to meet product quality and scheduling goals. In this dissertation, we make contributions to support socio-technical analyses of software projects by developing approaches for abstracting and analyzing the technical and social activities of developers. More specifically, we propose a fine-grained, verifiable, and fully automated approach to obtain a proper view on developer coordination, based on commit information and source-code structure, mined from version-control systems. We apply methodology from network analysis and machine learning to identify developer communities automatically. To evaluate our approach, we analyze ten open-source projects with complex and active histories, written in various programming languages. By surveying 53 open-source developers from the ten projects, we validate the accuracy of the extracted developer network and the authenticity of the inferred community structure. Our results indicate that developers of open-source projects form statistically significant community structures and that this particular network view largely coincides with developers' perceptions. Equipped with a valid network view on developer coordination, we extend our approach to analyze the evolutionary nature of developer coordination. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. We found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom.
In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which highly central developers are hierarchically arranged and all other developers are not. Our results suggest that the organizational structure of large software projects is constrained to evolve towards a state that balances the costs and benefits of coordination, and the mechanisms used to achieve this state depend on the project's scale. As a final contribution, we use developer networks to establish a richer understanding of the different roles that developers play in a project. Developers of open-source projects are often classified according to core and peripheral roles. Typically, count-based operationalizations, which rely on simple counts of individual developer activities (e.g., number of commits), are used for this purpose, but there is concern regarding their validity and ability to elicit meaningful insights. To shed light on this issue, we investigate whether count-based operationalizations of developer roles produce consistent results, and we validate them with respect to developers' perceptions by surveying 166 developers. We improve over the state of the art by proposing a relational perspective on developer roles, using our fine-grained developer networks, and by examining developer roles in terms of developers' positions and stability within the developer network. In a study of 10 substantial open-source projects, we found that the primary difference between the count-based and our proposed network-based core-peripheral operationalizations is that the network-based ones agree more with developer perception than the count-based ones. Furthermore, we demonstrate that a relational perspective can reveal further meaningful insights, such as that core developers exhibit high positional stability, upper positions in the hierarchy, and high levels of coordination with other core developers, which confirms assumptions of previous work. Overall, our research demonstrates that data stored in software repositories, paired with appropriate analysis approaches, can elicit valuable, practical, and valid insights concerning socio-technical aspects of software development. KW - Software Engineering Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4616 ER - TY - THES A1 - Sipal, Bilge T1 - Border Basis Schemes N2 - The basic idea of border basis theory is to describe a zero-dimensional ring P/I by an order ideal of terms whose residue classes form a K-vector space basis of P/I. The O-border basis scheme is a scheme that parametrizes all zero-dimensional ideals that have an O-border basis. In general, the O-border basis scheme is not an affine space. In [Huib09] it is proved that if an order ideal with d elements is defined in a two-dimensional polynomial ring and is of certain special shapes, then the O-border basis scheme is isomorphic to the affine space of dimension 2d.
This thesis is dedicated to finding a more general condition for an O-border basis scheme to be isomorphic to an affine space of dimension nd that is independent of the shape of the order ideal, where d is the number of elements of the order ideal and n is the dimension of the polynomial ring in which the order ideal is defined. We accomplish this in six chapters. In Chapters 2 and 3 we develop the concepts and properties of border basis schemes. In Chapter 4 we transfer the smoothness criterion (see [Huib05]) for the point (0,...,0) in a Hilbert scheme of points to the monomial point of the border basis scheme by employing the tools from border basis theory. In Chapter 5 we explain trace and Jacobi identity syzygies of the defining equations of an O-border basis scheme and characterize them by the arrow grading. In Chapter 6 we give a criterion for the isomorphism between the 2d-dimensional affine space and the O-border basis scheme by using the results from Chapters 3 and 4. The techniques from the other chapters are applied in Section 6.1 to segment border basis schemes and in Section 6.2 to O-border basis schemes for which O is of the sawtooth form. KW - Border Bases, Border Basis Scheme, Monomial point, Cotangent Space, Hilbert Schemes KW - Polynomring KW - Basis (Mathematik) KW - Kommutative Algebra, Randbasen, Randbasen Schema Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4702 ER - TY - THES A1 - Handeck, Jörg T1 - Analyse und Korrektur von Teileprogrammen für numerisch gesteuerte Werkzeugmaschinen N2 - Spline curves are often the first choice when data has to be interpolated or approximated. They play an important role in many practical fields of application and have become indispensable in CAD/CAM. In this context, the present thesis studies path points for the control of machine tools. The analysis is realized by means of a multiresolution (MRA) approach for spline curves with adaptive knot sequences. This MRA approach is based on a least-squares projection for knot removal and thus differs from known approaches that build on orthogonal complements. Furthermore, a concept for the approximation of orientation data by means of homogeneous quaternion splines is presented. These splines live on the sphere and can be refined by knot removal and insertion, so the presented MRA analysis method can likewise be applied to these curves. Moreover, a convex hull property could be established for these curves. KW - Spline MRA KW - Spline KW - Numerische Steuerung KW - Werkzeugmaschine Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4546 ER - TY - THES A1 - Zwicklbauer, Stefan T1 - Robust Entity Linking in Heterogeneous Domains N2 - Entity Linking is the task of mapping terms in arbitrary documents to entities in a knowledge base by identifying the correct semantic meaning. It is applied in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning and Question Answering. Most existing Entity Linking systems were optimized for specific domains (e.g., general domain, biomedical domain), knowledge base types (e.g., DBpedia, Wikipedia), or document structures (e.g., tables) and types (e.g., news articles, tweets).
This led to very specialized systems that lack robustness and are only applicable to very specific tasks. In this regard, this work focuses on the research and development of a robust Entity Linking system in terms of domains, knowledge base types, and document structures and types. To create a robust Entity Linking system, we first analyze the following three crucial components of an Entity Linking algorithm in terms of robustness criteria: (i) the underlying knowledge base, (ii) the entity relatedness measure, and (iii) the textual context matching technique. Based on the analyzed components, our scientific contributions are three-fold. First, we show that a federated approach leveraging knowledge from various knowledge base types can significantly improve robustness in Entity Linking systems. Second, we propose a new state-of-the-art, robust entity relatedness measure for topical coherence computation based on semantic entity embeddings. Third, we present the neural-network-based approach Doc2Vec as a textual context matching technique for robust Entity Linking. Based on our previous findings and outcomes, our main contribution in this work is DoSeR (Disambiguation of Semantic Resources). DoSeR is a robust, knowledge-base-agnostic Entity Linking framework that extracts relevant entity information from multiple knowledge bases in a fully automatic way. The integrated algorithm represents a collective, graph-based approach that utilizes semantic entity and document embeddings for entity relatedness and textual context matching computation. Our evaluation shows that DoSeR achieves state-of-the-art results over a wide range of different document structures (e.g., tables), document types (e.g., news documents) and domains (e.g., general domain, biomedical domain). In this context, DoSeR outperforms all other (publicly available) Entity Linking algorithms on most data sets. KW - Entity Linking KW - Neural Networks KW - Knowledge Bases KW - Linked Data KW - Wissensbasiertes System Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5047 ER - TY - THES A1 - Wendler, Philipp T1 - Towards Practical Predicate Analysis N2 - Software model checking is a successful technique for automated program verification. Several of the most widely used approaches for software model checking are based on solving first-order logic formulas over predicates using SMT solvers, e.g., predicate abstraction, bounded model checking, k-induction, and lazy abstraction with interpolants. We define a configurable framework for predicate-based analyses that allows expressing each of these approaches. This unifying framework highlights the differences between the approaches, producing new insights, and facilitates research of further algorithms and their combinations, as witnessed by several research projects that have been conducted on top of this framework. In addition to this theoretical contribution, we provide a mature implementation of our framework in the software verifier CPAchecker that allows applying all of the mentioned approaches in practice. This implementation is used by other research groups, e.g., to find bugs in the Linux kernel, and has proven its competitiveness by winning gold medals in the International Competition on Software Verification. Tools and approaches for software model checking like our predicate analysis are typically evaluated using performance benchmarking on large sets of verification tasks.
We have identified several pitfalls that can silently arise during benchmarking, and we have found that the benchmarking techniques and tools that are used by many researchers do not guarantee valid results in practice, but may produce arbitrarily large measurement errors. Furthermore, certain hardware characteristics can also have a nondeterministic influence on the measurements. In order to be able to properly evaluate our framework for software verification, we study the effects of these hardware characteristics and define a list of the most important requirements that need to be ensured for reliable benchmarking. As a solution, we present the open-source benchmarking framework BenchExec, which, in contrast to other benchmarking tools, fulfills all our requirements and aims at making reliable benchmarking easy. BenchExec has already been adopted by several research groups and the International Competition on Software Verification. Using the power of BenchExec, we conduct an experimental evaluation of our unifying framework for predicate analysis. We study the effect of varying the SMT solver and the way program semantics are encoded in formulas across several verification algorithms, and find that these technical choices can significantly influence the results of experimental studies of verification approaches. This is valuable information both for researchers who study verification approaches and for users who apply them in practice. Our comprehensive study of 120 different configurations would not have been possible without our highly flexible and configurable unifying framework for predicate analysis and shows that the latter is a valuable base for conducting experiments. Furthermore, using a comparison against top-ranking verifiers from the International Competition on Software Verification, we show that our implementation is highly competitive and can outperform the state of the art. KW - Programmverifikation Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5098 ER - TY - THES A1 - He, Xiaobing T1 - Threat Assessment for Multistage Cyber Attacks in Smart Grid Communication Networks N2 - In smart grids, managing and controlling power operations are supported by information and communication technology (ICT) and supervisory control and data acquisition (SCADA) systems. The increasing adoption of new ICT assets in smart grids is making smart grids vulnerable to cyber threats, as well as raising numerous concerns about the adequacy of current security approaches. As a single act of penetration is often not sufficient for an attacker to achieve his/her goal, multistage cyber attacks may occur. Due to the interdependence between the power grid and the communication network, a multistage cyber attack not only affects the cyber system but also impacts the physical system. This thesis investigates an application-oriented stochastic game-theoretic cyber threat assessment framework, which is strongly related to the information security risk management process as standardized in ISO/IEC 27005. The proposed cyber threat assessment framework seeks to address the specific challenges (e.g., dynamically changing attack scenarios and understanding cascading effects) that arise when performing threat assessments for multistage cyber attacks in smart grid communication networks.
The thesis looks at the stochastic and dynamic nature of multistage cyber attacks in smart grid use cases and develops a stochastic game-theoretic model to capture the interactions of the attacker and the defender in multistage attack scenarios. To provide a flexible and practical payoff formulation for the designed stochastic game-theoretic model, this thesis presents a mathematical analysis of cascading failure propagation (including both interdependency cascading failure propagation and node overloading cascading failure propagation) in smart grids. In addition, the thesis quantifies the characterizations of the disruptive effects of cyber attacks on physical power grids. Furthermore, this thesis discusses, in detail, the ingredients of the developed stochastic game-theoretic model and presents the implementation steps of the investigated stochastic game-theoretic cyber threat assessment framework. An application of the proposed cyber threat assessment framework for evaluating a demonstrated multistage cyber attack scenario in smart grids is shown. The cyber threat assessment framework can be integrated into an existing risk management process, such as ISO 27000, or applied as a standalone threat assessment process in smart grid use cases. KW - Smart Grids KW - Game Theory KW - Cascading Failures KW - Threat Assessment KW - Communication Networks KW - Intelligentes Stromnetz KW - Sicherheit KW - Spieltheorie Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5051 ER - TY - THES A1 - Ruppert, Julia T1 - Asymptotic Expansion for the Time Evolution of the Probability Distribution Given by the Brownian Motion on Semialgebraic Sets N2 - In this thesis, we examine whether the probability distribution given by the Brownian Motion on a semialgebraic set is definable in an o-minimal structure, and we establish asymptotic expansions for the time evolution. We study the probability distribution as an example for the occurrence of special parameterized integrals of a globally subanalytic function and the exponential function of a globally subanalytic function. This work is motivated by the work of Comte, Lion and Rolin, who considered parameterized integrals of globally subanalytic functions, by the work of Cluckers and Miller, who examined parameterized integrals of constructible functions, and by the work of Cluckers, Comte, Miller, Rolin and Servi, who treated oscillatory integrals of globally subanalytic functions. In the one-dimensional case, we show that the probability distribution on a family of sets which are definable in an o-minimal structure is definable in the Pfaffian closure. In the two-dimensional case, we investigate asymptotic expansions for the time evolution. As time t approaches zero, we show that the integrals behave like a Puiseux series, which is not necessarily convergent. As t tends towards infinity, we show that the probability distribution is definable in the expansion of the real ordered field by all restricted analytic functions if the semialgebraic set is bounded. For this purpose, we apply results for parameterized integrals of globally subanalytic functions of Lion and Rolin. By establishing the asymptotic expansion of the integrals over an unbounded set, we demonstrate that this expansion has the form of a convergent Puiseux series with negative exponents and their logarithms. It follows that the asymptotic expansion is definable in an o-minimal structure.
Finally, we study the three-dimensional case and prove that the probability distribution given by the Brownian Motion behaves like a Puiseux series as time t tends towards zero. If the semialgebraic set is bounded, it can be ascertained by results of Cluckers and Miller that, as t approaches infinity, the probability distribution has the form of a constructible function and is therefore definable in an o-minimal structure. If the semialgebraic set is unbounded, we establish the asymptotic expansions and prove that the probability distribution given by the Brownian Motion on unbounded sets has an asymptotic expansion of the form of a constructible function. As a consequence, the asymptotic expansion is definable in an o-minimal structure. KW - o-minimality KW - asymptotic expansions KW - globally subanalytic sets KW - exponential parameterized integrals KW - Brownian Motion KW - Brownsche Bewegung KW - O-Minimalität KW - Asymptotische Entwicklung Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5069 ER - TY - THES A1 - Klaghstan, Merza T1 - Multimedia data dissemination in opportunistic systems N2 - Opportunistic networks (OppNets) are human-centric mobile ad-hoc networks, in which neither the topology nor the participating nodes are known in advance. Routing is dynamically planned following the store-carry-and-forward paradigm, which takes advantage of people's mobility. This widens the range of communication and supports indirect end-to-end data delivery. But due to individuals' mobility, OppNets are characterized by frequent communication disruptions and uncertain data delivery. Hence, these networks are mostly used for exchanging small messages like disaster alarms or traffic notifications. Other scenarios that require the exchange of larger data (e.g. video) are still challenging due to the characteristics of this kind of network. However, there are still multimedia sharing scenarios where a user might need to switch from infrastructure-based communication to an ad-hoc alternative. Examples are the cases of 1) absence of infrastructural networks in remote rural areas, 2) high costs due to roaming or limited data volumes, or 3) undesirable censorship by third parties while exchanging sensitive content. Consequently, in this thesis we target a video dissemination scheme for OppNets. For the video delivery problem in sparse opportunistic networks, we propose a solution with the objective of reducing the video playout delay, enabling the recipient to play the video content as soon as possible, even if at a low quality. The received video later reaches a higher quality level, ensuring a better viewing experience. The proposed solution comprises three contributions. The first one granulates the video at the source node into smaller parts and associates them with unequal redundancy degrees. This is technically based on using Scalable Video Coding (SVC), which encodes a video into several layers of unequal importance for viewing the content at different quality levels. Layers are routed using the Spray-and-Wait routing protocol, with different redundancy factors for the different layers depending on their importance degree. In this context, a video viewing QoE metric is also proposed, which takes perceived video quality, delivery delay, and network overhead into consideration on a scalable basis.
Second, we take advantage of the small units of the Network Abstraction Layer (NAL), which compose SVC layers. NAL units are packetized together under specific size constraints to optimize granularity. Packet sizes are tuned adaptively with regard to the dynamic network conditions. Each node records a history of environmental information regarding contacts and forwarding opportunities, and uses this history to predict future opportunities and optimize the packet sizes accordingly. Lastly, the receiver (destination) node is pushed into action by reacting to missing data parts in a composite “backward” loss concealment mechanism. The receiver first requests the missing data from other nodes in the network in a request-response manner. Then, since the transmission is concerned with video content, video frame loss error concealment techniques are also exploited at the receiver side. Consequently, we propose to combine the two techniques in the loss concealment mechanism, which is then able to react to missing data parts. To study the feasibility and the applicability of the proposed solutions, simulation-driven experiments are performed, and statistical results are collected and analyzed. The promising results show the applicability of video dissemination in opportunistic delay-tolerant networks and open the door for a range of possible future works. KW - Opportunistisches Netzwerk KW - Multimedia KW - Videoübertragung Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4438 ER - TY - THES A1 - Kinseher, Josef T1 - New Methods for Improving Embedded Memory Manufacturing Tests N2 - Due to the need for fast and energy-efficient access to growing amounts of data, the share and number of embedded memories inside modern microchips have been continuously increasing in recent years. Since embedded memories have the highest integration density of a fabrication technology, they pose special test challenges due to complex manufacturing defects as well as strong transistor aging phenomena. This necessitates efficient methods for detecting more subtle defects while keeping test costs low. This work presents novel methods and techniques for improving the efficiency of embedded memory manufacturing tests. The proposed methods are demonstrated in an industrial setting based on production-proven transistor, memory, and chip models, and their benefits over the current state of the art are worked out. KW - Speicher KW - Chip KW - Eingebettetes System Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6017 ER - TY - THES A1 - Gütschow, Silja T1 - Roulets - Eine Integraltransformation zur Bestimmung gerichteter Singularitäten N2 - In this thesis, a new integral transform, the roulet transform, is introduced. It works with anisotropic scalings and rotations. It is shown that the roulet transform resolves general directed singularities in the sense of tempered distributions. The decay rates at point and line singularities are stated explicitly. KW - Integraltransformation KW - Singularität Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5929 ER - TY - THES A1 - Tomashevich, Victor T1 - Fault Tolerance Aspects of Virtual Massive MIMO Systems N2 - Employment of a very large number of antennas is seen as the key technology to provide future users with very high data rates.
At the same time, the implementation complexity rises due to the large memories required and the sophisticated signal processing algorithms employed. Continuous technology downscaling allows implementation of such complex digital designs. However, its inherent variability and vulnerability to physical disturbances violate the assumption of perfectly reliable hardware operation. This work considers Unique Word OFDM, which represents an alternative to standard Cyclic Prefix OFDM and provides superior detection quality. The generalization of Unique Word OFDM to a MIMO system is performed, which allows its interpretation as a virtual massive MIMO system with only a few physical antennas. Detection methods for the introduced generalization are discussed and their performance is quantified. Because of the large memory size required, linear detection represents a cost- and performance-effective solution. Possible memory errors due to radiation effects or voltage scaling are addressed, and a nonlinear MMSE detection algorithm is proposed. This algorithm keeps track of the memory errors and is able to significantly mitigate their effect on the quality of the estimated data. Apart from memory issues, the reliability of the actual computational hardware that constitutes the receiver is also of concern in this work. Our own implementation of MMSE Sorted Givens Rotations is subjected to transient fault injection. The impact of faults in various parts of the implemented circuit on the detection performance is quantified. The most vulnerable components of the implemented circuit in terms of reliability are identified. Security is another major focus of this work, since most current implementations include cryptographic devices. Fault-based attacks on such systems are known to be able to extract the secret key in feasible time. The remaining part of this work addresses such fault injection-based malicious attacks. Countermeasures based on a combination of information and hardware redundancy are considered. Recently introduced robust codes target such attacks by providing guaranteed detection capability. The performance of these codes is assessed by application to actual cryptographic and general purpose circuits. The work introduces metrics that help to identify fault locations in the circuit which could escape detection with high probability. These locations are targeted by transistor resizing that renders fault injection infeasible. KW - MIMO Systems KW - MIMO KW - Fehlertoleranz Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4047 ER - TY - THES A1 - Jiang, Jie T1 - Delay Testing in Nanoscale Technology under Process Variations N2 - In modern CMOS technology, process variations have a significantly increased impact on circuit behavior as transistor sizes continue to scale down. Manufactured devices tend to have different performances due to parameter variations during manufacturing and in the operating context. Conventional tests generated regardless of variations could fail to rule out devices with low performance or even functional failures caused by extreme variations; this in turn raises the unreliability of shipped products. To tackle the problem, many existing test approaches have focused on identifying and testing a number of critical paths in the circuit, and have aimed at the efficiency of the search process.
However, the statistical circuit model, which better describes the circuit timing behavior under variations, is not yet sufficiently investigated and employed by existing testing methodologies. This thesis work proposes Opt-KLPG and MIRID, which can be utilized by a statistical delay testing flow. Opt-KLPG, a K Longest Paths Generation (KLPG) algorithm for optimal solutions under memory constraints, builds on the traditional KLPG algorithm and generates targeted tests for small delay defects, which are common small timing deviations under process variations. In contrast to KLPG, Opt-KLPG guarantees the optimality of the solution (it indeed finds the K longest sensitizable paths). MIRID is a mixed-mode timing-aware simulator, incorporating effects of power-supply noise and combining an event-driven logic simulation engine with interfaces to provided electrical models. MIRID aims at evaluating delay tests in the presence of process variations efficiently yet accurately, by performing logic simulation at the gate level while determining the gate delays using simplified electrical models. The electrical models applied by the simulator focus on the IR drop effect. Electrical parameters mainly contributing to the effect are incorporated into the model. The simulator is generic and can be flexibly adapted by modifying the interfaces with minor effort. Both applications were verified in various aspects by experiments on academic and industrial circuits, and turned out to have satisfactory effectiveness and performance. KW - Digital CMOS IC KW - Test KW - Power Noise KW - Simulation KW - Parameter variations KW - Small-delay testing KW - K longest path generation KW - CMOS Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4229 ER - TY - THES A1 - Alsarem, Mazen T1 - Semantic Snippets via Query-Biased Ranking of Linked Data Entities N2 - In our knowledge-driven society, the acquisition and the transfer of knowledge play a principal role. Web search engines are, in a sense, tools for knowledge acquisition and transfer from the web to the user. The search engine results page (SERP) consists mainly of a list of links and snippets (excerpts from the results). The snippets are used to express, as efficiently as possible, the way a web page may be relevant to the query. As an extension of the existing web, the semantic web or “web 3.0” is designed to convert the presently available web of unstructured documents into a web of data consumable by both humans and machines. The resulting web of data and the current web of documents coexist and interconnect via multiple mechanisms, such as embedded structured data or automatic annotation. In this thesis, we introduce a new interactive artifact for the SERP: the “Semantic Snippet”. Semantic Snippets rely on the coexistence of the two webs to facilitate the transfer of knowledge to the user thanks to a semantic contextualization of the user’s information need. They make apparent the relationships between the information need and the most relevant entities present in the web page. The generation of semantic snippets is mainly based on the automatic annotation of LOD (Linked Open Data) entities in web pages. The annotated entities have different levels of importance, usefulness and relevance. Even with state-of-the-art solutions for the automatic annotation of LOD entities within web pages, there is still a lot of noise in the form of erroneous or off-topic annotations.
Therefore, we propose a query-biased algorithm (LDRANK) for the ranking of these entities. LDRANK adopts a strategy based on the linear consensual combination of several sources of prior knowledge (any form of contextual knowledge, like the textual descriptions for the nodes of the graph) to modify a PageRank-like algorithm. For generating semantic snippets, we use LDRANK to find the most relevant entities in the web page. Then, we use a supervised learning algorithm to link each selected entity to excerpts from the web page that highlight the relationship between the entity and the original information need. In order to evaluate our semantic snippets, we integrate them into ENsEN (Enhanced Search Engine), a software system that enhances the SERP with semantic snippets. Finally, we use crowdsourcing to evaluate the usefulness and the efficiency of ENsEN. N2 - In unserer heutigen Wissensgesellschaft spielen der Erwerb und die Weitergabe von Wissen eine zentrale Rolle. Internetsuchmaschinen fungieren als Werkzeuge für den Erwerb und die Weitergabe von Wissen aus dem Web an den Nutzer. Die Ergebnisliste einer Suchmaschine (SERP) besteht grundsätzlich aus einer Liste von Links und Textauszügen (Snippets). Diese Snippets sollen auf möglichst effiziente Weise ausdrücken, inwiefern eine Webseite für die Suchanfrage relevant ist. Als Erweiterung des bestehenden Internets überführt das semantische Web - auch genannt “Web 3.0” - das momentan vorhandene Internet der unstrukturierten Dokumente in ein Internet der Daten, das sowohl von Menschen als auch Maschinen verwendet werden kann. Das neu geschaffene Internet der Daten und das derzeitige Internet der Dokumente existieren gleichzeitig und sie sind über eine Vielzahl von Mechanismen miteinander verbunden, wie beispielsweise über eingebettete strukturierte Daten oder eine automatische Annotation. In dieser Arbeit stellen wir ein neues interaktives Artefakt für das SERP vor: Das “Semantische Snippet”. Semantische Snippets stützen sich auf die Koexistenz der beiden Arten des Internets, um mit Hilfe der Kontextualisierung des Informationsbedürfnisses eines Nutzers die Weitergabe von Wissen zu erleichtern. Sie stellen die Verbindung zwischen dem Informationsbedürfnis und den besonders relevanten Entitäten einer Webseite heraus. Die Erzeugung semantischer Snippets basiert überwiegend auf der automatisierten Annotation von Webseiten mit Entitäten aus der Linking Open Data Cloud (LOD). Die annotierten Entitäten besitzen unterschiedliche Ebenen hinsichtlich Wichtigkeit, Nützlichkeit und Relevanz. Selbst bei State-of-the-Art-Lösungen zur automatisierten Annotation von LOD-Entitäten in Webseiten gibt es stets ein großes Maß an Rauschen in Form von fehlerhaften oder themenfremden Annotationen. Wir stellen deshalb einen anfragegetriebenen Algorithmus (LDRANK) für das Ranking dieser Entitäten vor. LDRANK setzt eine Strategie ein, die auf der linearen Konsensus-Kombination (engl. linear consensual combination) mehrerer a-priori Wissensquellen (jedwede Art von Kontextwissen, wie beispielsweise die textuelle Beschreibung der Knoten des Graphen) basiert, um damit den PageRank-Algorithmus zu modifizieren. Zur Generierung semantischer Snippets finden wir zunächst mit Hilfe von LDRANK die relevantesten Entitäten in einer Webseite. Anschließend verwenden wir ein überwachtes Lernverfahren, um jede ausgewählte Entität denjenigen Abschnitten der Webseite zuzuordnen, die die Beziehung zwischen der Entität und dem ursprünglichen Informationsbedarf am besten herausstellen.
Um unsere semantischen Snippets zu evaluieren, integrieren wir sie in ENsEN (Enhanced Search Engine), ein Softwaresystem, das die SERP um semantische Snippets erweitert. Zum Abschluss bewerten wir die Nützlichkeit und die Effizienz von ENsEN mittels Crowdsourcing. KW - Semantic Snippets KW - Entity Ranking KW - Web of Data KW - World Wide Web 3.0 KW - Suchmaschine KW - Suchmaschinenoptimierung KW - Ranking Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3959 ER - TY - THES A1 - Berndl, Emanuel T1 - Embedding a Multimedia Metadata Model into a Workflow-driven Environment Using Idiomatic Semantic Web Technologies N2 - The Semantic Web has existed for about 20 years now, but its applicability as well as its presence do not live up to the standards of its original idea. The incorporated Semantic Web Technologies have an initial barrier to learning and applying them, which can discourage many potential users. This leads to less available data overall in addition to decreased data quality. This work solves parts of the aforementioned problem by supporting idiomatic entry to those Semantic Web Technologies, allowing for "easier" accessibility and usability. Anno4j is a Java library that implements a form of Object-Relational Mapping for RDF data. With its application, RDF data can be created via a mapping by simply instantiating Java objects - an object-oriented programming concept the user is familiar with. On the other hand, requesting persisted data is supported by a path-based querying possibility, while other features like transactional behaviour, code generation, and automated validation of input contribute to a more effective, comprehensive, and straightforward usage. A use-case is provided by the MICO Platform, a centralized software instance that connects autonomous multimedia extractors in a workflow-driven fashion. This leads to a rich metadata background for the inserted multimedia files, enabling them to be used in diverse scenarios as well as unlocking yet hidden semantics. For this task it was necessary to design and implement a metadata model that is able to aggregate and merge the varying extractor results under a common denominator: the MICO Metadata Model. The results of this work allow the use case to incorporate idiomatic Semantic Web Technologies which are then usable natively by non-Semantic Web experts. Additionally, an increase has been achieved in terms of data integration, synchronisation, integrity, and validity, as well as an overall more comprehensive and rich implementation of the multimedia extractors. KW - Semantic Web KW - Multimedia KW - Workflows KW - Multimedia KW - Metadaten KW - RDF KW - Semantic Web Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6708 ER - TY - THES A1 - Awwad, Tarek T1 - Context-Aware Worker Selection For Efficient Quality Control In Crowdsourcing N2 - In the last decade, crowdsourcing has proved its ability to address large-scale data collection tasks, such as labeling large data sets, at a low cost and in a short time. However, the performance and behavior variability between workers as well as the variability in task designs and contents induce an unevenness in the quality of the produced contributions and, thus, in the final output quality. In order to maintain the effectiveness of crowdsourcing, it is crucial to control the quality of the contributions.
Furthermore, maintaining the efficiency of crowdsourcing requires the time and cost overhead related to quality control to be as low as possible. While effective, current quality control techniques such as contribution aggregation, worker selection, context-specific reputation systems, and multi-step workflows suffer from fairly high time and budget overheads and from their dependency on prior knowledge about individual workers. In this thesis, we address this challenge by leveraging the similarity between completed and incoming tasks as well as the correlation between the workers' declarative profiles and their performance in previous tasks in order to perform an efficient task-aware worker selection. To this end, we propose the CAWS (Context-Aware Worker Selection) method, which operates in two phases: In an offline phase, completed tasks are clustered into homogeneous groups, for each of which the correlation with the workers' declarative profiles is learned. Then, in the online phase, incoming tasks are matched to one of the existing clusters, and the corresponding, previously inferred profile model is used to select the most reliable online workers for the given task. Using declarative profiles helps eliminate any probing process, which reduces the time and the budget while maintaining the crowdsourcing quality. Furthermore, the set of completed tasks, when compared to a probing task split, provides a larger corpus from which a more precise profile model can be learned. This translates to a better selection quality, especially for harder tasks. In order to evaluate CAWS, we introduce CrowdED (Crowdsourcing Evaluation Dataset), a rich dataset to evaluate quality control methods and quality-driven task vectorization and clustering. The generation of CrowdED relies on a constrained sampling approach that allows producing a task corpus which respects both the budget and the type constraints. Besides helping in evaluating CAWS, and through its generality and richness, CrowdED helps to close the benchmarking gap in the crowdsourcing quality control community. Using CrowdED, we evaluate the performance of CAWS in terms of the quality of the worker selection and in terms of the achieved time and budget reduction. The results show the following: First, automatic grouping is able to achieve a learning quality similar to job-based grouping. Second, CAWS is able to outperform state-of-the-art profile-based worker selection in terms of quality. This is especially true when strong budget and time constraints are present on the requester side. Finally, we complement our work with a software contribution consisting of an open-source framework called CREX (CReate Enrich eXtend). CREX allows the creation, the extension and the enrichment of crowdsourcing datasets. It provides the tools to vectorize, cluster and sample a task corpus to produce constrained task sets and to automatically generate custom crowdsourcing campaign sites. N2 - Im letzten Jahrzehnt hat Crowdsourcing seine Fähigkeit bewiesen, große Datensammelaufgaben, wie die Beschriftung großer Datensätze, zu geringen Kosten und in kurzer Zeit zu bewältigen. Die Leistungs- und Verhaltensschwankungen zwischen den Arbeitern sowie die Variabilität in den Aufgabenentwürfen und -inhalten führen jedoch zu einer Ungleichmäßigkeit in der Qualität der erworbenen Beiträge und somit in der endgültigen Ausgabequalität. Um die Effektivität von Crowdsourcing zu erhalten, ist es entscheidend, die Qualität der einzelnen Beiträge zu kontrollieren.
Darüber hinaus erfordert die Aufrechterhaltung der Effizienz von Crowdsourcing, dass der Zeit- und Kostenaufwand für die Qualitätskontrolle am geringsten ist. Effektive, aktuelle Qualitätskontrolltechniken wie die Aggregation von Beiträgen, die gezielte Auswahl von Arbeitern, kontextspezifische Reputationssysteme und mehrstufige Workflows leiden unter ziemlich hohen Zeit- und Budgetzwangslagen und unter ihrer Abhängigkeit von vorausgehenden Kenntnissen über die einzelnen Arbeiter. In dieser Arbeit gehen wir diese Herausforderungen an, indem wir die Ähnlichkeit zwischen abgeschlossenen und eingehenden Aufgaben sowie die Korrelation zwischen den von Arbeitern deklarierten Profilen und deren Leistung in früheren Aufgaben nutzen, um eine effiziente aufgabenbewusste Arbeiterauswahl durchzuführen. Zu diesem Zweck schlagen wir eine zweiphasige Methode vor: CAWS (Context Aware Worker Selection). In einer Offline-Phase werden bereits bearbeitete Aufgaben in homogene Cluster gruppiert, für welche jeweils die Korrelation mit dem vorab deklarierten Profil der Arbeiter erlernt wird. In der Online-Phase werden eingehende Aufgaben dann einem der vorhandenen Cluster zugeordnet, und das entsprechende, zuvor erschlossene Profilmodell wird dazu verwendet, um die vertrauenswürdigsten Online-Mitarbeiter für die gegebene Aufgabe auszuwählen. Die Verwendung von deklarativen Profilen hilft dabei, jeglichen Sondierungsprozess zu eliminieren, wobei Zeit und Kosten reduziert werden und gleichzeitig die Crowdsourcing-Qualität beibehalten wird. Darüber hinaus bietet das Aggregat der abgeschlossenen Aufgaben im Vergleich zu einer Aufgabenaufteilung durch Sondierung einen größeren Korpus, aus dem ein präziseres Profilmodell erlernt werden kann. Dies führt zu einer besseren Auswahlqualität, insbesondere für schwierigere Aufgaben. Um CAWS zu evaluieren, stellen wir CrowdED (Crowdsourcing Evaluation Dataset) vor, einen umfassenden Datensatz zur Evaluierung von Qualitätskontrollmethoden und qualitätsgetriebener Aufgaben-Vektorisierung und Clusterbildung. Die Generierung von CrowdED basiert auf einem bedingten Stichprobeverfahren, welches es ermöglicht, einen Aufgaben-Corpus zu erstellen, der sowohl die Budget- als auch die Typ-Bedingungen einhält. Neben seiner Allgemeingültigkeit und Reichhaltigkeit hilft CrowdED nicht nur bei der Bewertung von CAWS, sondern es hilft auch dabei, die Benchmarking-Lücke in der Crowdsourcing-Community für Qualitätskontrolle zu schließen. Mit CrowdED evaluieren wir die Leistung von CAWS im Hinblick auf die Qualität der Arbeiterauswahl und auf die erreichte Zeit- und Kostenreduzierung. Die Ergebnisse zeigen Folgendes: Zum einen kann mit der automatischen Gruppierung eine Lernqualität ähnlich der von Job-basierten Gruppierungen erreicht werden. Zum anderen ist CAWS in der Lage, die aktuellen profilbasierten Auswahlmethoden in Bezug auf Qualität zu übertreffen. Dies gilt insbesondere dann, wenn auf der Anfordererseite starke Budget- und Zeitbeschränkungen bestehen. Schließlich ergänzen wir unsere Arbeit mit einer Software, die aus einem lizenzfreien Framework namens CREX (CReate Enrich eXtend) besteht. CREX ermöglicht die Erstellung, Erweiterung und Anreicherung von Crowdsourcing-Datensätzen. Es liefert die nötigen Werkzeuge, um einen Aufgabenkorpus zu vektorisieren, zu gruppieren und zu samplen, um eingeschränkte Aufgabensätze zu erzeugen und um automatisch benutzerdefinierte Crowdsourcing-Kampagnen-Seiten zu generieren.
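To make the two-phase selection idea described above concrete, the following is a minimal sketch: offline clustering of completed tasks, a per-cluster profile model, and online matching. All data is synthetic and the features are hypothetical; this is not the CAWS implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Offline phase: cluster vectorized completed tasks into homogeneous groups.
task_vectors = rng.random((200, 8))          # e.g., TF-IDF or metadata features
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit(task_vectors)

# For each cluster, learn how declarative worker profiles (e.g., self-reported
# skills) correlate with observed answer quality on that cluster's tasks.
models = {}
for c in range(4):
    profiles = rng.random((300, 5))          # declarative profile features
    good = profiles[:, 0] + profiles[:, 1] > 1.0   # synthetic quality labels
    models[c] = LogisticRegression().fit(profiles, good)

# Online phase: match an incoming task to a cluster and rank the workers
# currently online by their predicted reliability for that cluster.
new_task = rng.random((1, 8))
c = int(clusters.predict(new_task)[0])
online_workers = rng.random((50, 5))
reliability = models[c].predict_proba(online_workers)[:, 1]
selected = np.argsort(reliability)[::-1][:5]         # top-5 most reliable workers
print("cluster:", c, "selected workers:", selected)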
KW - Crowdsourcing KW - Quality control KW - Machine learning KW - Qualitätssicherung KW - Open Innovation Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7409 ER - TY - THES A1 - Kolesnikov, Sergiy T1 - Feature Interactions in Configurable Software Systems N2 - Software has become an important part of our lives. Therefore, the number of different application scenarios and user requirements of software systems grows rapidly. To satisfy these requirements, software vendors build configurable software systems that can be tailored to diverse needs without rebuilding them from scratch, which reduces costs and development time. Despite considerable advances in software engineering, which allow building high-quality configurable software systems, some challenges remain. One of these challenges is the feature interaction problem that arises when parts (features), from which a configurable system is composed, interact in unexpected ways and inadvertently change the behavior or quality attributes (such as performance) of the system. The goal of this dissertation is to systematically study the nature of feature interactions, their causes, their influence on performance of configurable systems, and, based on empirical results, to suggest ways of improving techniques for detecting and predicting feature interactions. More specifically, we compared and evaluated different strategies for the analysis of configurable software systems. The results of our evaluation complement empirical data from previous work about how different analysis strategies for configurable software systems compare with respect to different aspects, such as performance. These results shall be used to develop effective and scalable techniques and tools for the analysis of configurable software, including feature-interaction detection and prediction techniques and tools. Technically, we used a machine-learning technique to quantify the influence of feature interactions on performance of real-world configurable systems. We studied the characteristics of interactions that have the largest influence on performance and found that interactions among few features have a higher influence than interactions among many features. With a growing number of interacting features, the influence of the corresponding interactions decreases consistently. This implies that interactions involving many features can be ignored in practice because of their marginal influence on performance. We also investigated the causes of the interactions and were able to identify several patterns that link these interactions to the architecture of the systems: For example, we found that if a data processing system consisted of multiple features that processed the same data in sequence, then these features interacted. The identified patterns can help to anticipate performance interactions already at an early development stage when a system’s architecture is designed. Furthermore, considering that control-flow interactions (observable at the level of control flow among features) are easier to detect than performance interactions (externally observable through measuring performance of different combinations of features), we conducted a case study on two configurable systems. In this case study, we investigated a possible relation between control-flow feature interactions and performance feature interactions.
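As an illustration of the kind of machine-learning analysis mentioned above, the following sketch learns a performance-influence model with pairwise interaction terms from synthetic measurements. The feature names, coefficients, and sampling are hypothetical; this is not the tooling used in the dissertation.

import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

features = ["compress", "encrypt", "cache", "log"]
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(64, 4))         # sampled binary configurations

# Ground truth (hidden from the learner): individual influences plus one
# pairwise interaction between 'compress' and 'encrypt'.
perf = 10 + 3*X[:, 0] + 5*X[:, 1] + 1*X[:, 2] + 4*(X[:, 0] & X[:, 1])
perf = perf + 0.1 * rng.standard_normal(64)  # measurement noise

# Expand the configuration matrix with all pairwise interaction columns.
pairs = list(combinations(range(4), 2))
X_ext = np.hstack([X] + [(X[:, i] * X[:, j]).reshape(-1, 1) for i, j in pairs])

model = LinearRegression().fit(X_ext, perf)
terms = features + [f"{features[i]}*{features[j]}" for i, j in pairs]
for term, coef in zip(terms, model.coef_):
    print(f"{term:20s} influence = {coef:+.2f}")

The learned coefficient of the compress*encrypt column approximates the hidden interaction, which is how an influence model makes interactions explicit.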
We also discussed how this relation can be exploited by interaction detection and performance prediction techniques to make them more time-efficient and precise. Our case study on two real-world configurable systems revealed that a relation indeed exists, and we were able to show how it can be used to reduce the search space of possibly existing performance interactions. The study can serve as a blueprint for further studies that can rely on our conceptual framework for investigating relations among external and internal interactions. Overall, the contribution of this dissertation consists of scientific and technical insights, practical tool implementations, empirical evaluations, and case studies that advance the current state of research in the area of feature interactions in configurable software systems. In particular, we provide insights into the causes of feature interactions and their influence on the performance of real-world configurable systems (e.g., interaction patterns, decreasing influence of interactions with a growing number of involved features). Our results also suggest ways of improving techniques for detecting and predicting feature interactions (e.g., ignoring interactions among many features, reducing the search space based on relations among interactions). KW - Configurable software system KW - Feature interaction KW - Performance influence model KW - Software product line KW - Variability-aware software analysis strategy KW - Softwareentwicklung KW - Qualitätssicherung Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6739 ER - TY - CHAP A1 - Berger, Christian A1 - Reiser, Hans P. A1 - Sousa, João A1 - Bessani, Alysson T1 - Resilient Wide-Area Byzantine Consensus Using Adaptive Weighted Replication T2 - 38th IEEE International Symposium on Reliable Distributed Systems (SRDS 2019) N2 - In geo-replicated systems, the heterogeneous latencies of connections between replicas limit the system’s ability to achieve fast consensus. State machine replication (SMR) protocols can be refined for their deployment in wide-area networks by using a weighting scheme for active replication that employs additional replicas and assigns higher voting power to faster replicas. Utilizing more variability in quorum formation allows replicas to proceed more swiftly to subsequent protocol stages, thus decreasing consensus latency. However, if network conditions vary during the system’s lifespan or faults occur, the system needs a solution to autonomously adjust to new conditions. We incorporate the idea of self-optimization into geographically distributed, weighted replication by introducing AWARE, an automated and dynamic voting weight tuning and leader positioning scheme. AWARE measures replica-replica latencies and uses a prediction model, striving to minimize the system’s consensus latency. In experiments using different Amazon EC2 regions, AWARE dynamically optimizes consensus latency by self-reliantly finding a fast weight configuration yielding latency gains observed by clients located across the globe. KW - adaptiveness, weighted replication, consensus, geo-replication, Byzantine fault tolerance, self-optimization Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7537 PB - IEEE Xplore ER - TY - CHAP A1 - Berger, Christian A1 - Reiser, Hans P.
T1 - Scaling Byzantine Consensus: A Broad Analysis T2 - SERIAL'18 Proceedings of the 2nd Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers N2 - Blockchains and distributed ledger technology (DLT) that rely on Proof-of-Work (PoW) typically show limited performance. Several recent approaches incorporate Byzantine fault-tolerant (BFT) consensus protocols in their DLT design, as Byzantine consensus allows for increased performance and energy efficiency and offers proven liveness and safety properties. While there has been a broad variety of research on BFT consensus protocols over the last decades, those protocols were originally not intended to scale to a large number of nodes. Thus, the quest for scalable BFT consensus was initiated with the emerging research interest in DLT. In this paper, we first provide a broad analysis of various optimization techniques and approaches used in recent protocols to scale Byzantine consensus for large environments such as BFT blockchain infrastructures. We then present an overview of both efforts and assumptions made by existing protocols and compare their solutions. KW - Distributed Ledgers KW - Blockchain KW - Byzantine Fault-Tolerant Consensus Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7526 SN - 978-1-4503-6110-1 PB - ACM CY - New York, NY, USA ER - TY - THES A1 - Kell, Christian T1 - A Structure-based Attack on the Linearized Braid Group-based Diffie-Hellman Conjugacy Problem in Combination with an Attack using Polynomial Interpolation and the Chinese Remainder Theorem N2 - This doctoral thesis is dedicated to improving a linear algebra attack on the so-called braid group-based Diffie-Hellman conjugacy problem (BDHCP). The general procedure of the attack is to transform a BDHCP into the problem of solving several simultaneous matrix equations. A first improvement is achieved by reducing the solution space of the matrix equations to matrices that have a specific structure, which we call here the left braid structure. Using the left braid structure, the number of matrix equations to be solved reduces to one. Based on the left braid structure, we are further able to formulate a structure-based attack on the BDHCP. That is, we transform the matrix equation into a system of linear equations and exploit the structure of the corresponding extended coefficient matrix, which is induced by the left braid structure of the solution space. The structure-based attack then has an empirically high probability to solve the BDHCP with significantly fewer arithmetic operations than the original attack. A third improvement of the original linear algebra attack is to use an algorithm that combines Gaussian elimination with integer polynomial interpolation and the Chinese remainder theorem (CRT), instead of fast matrix multiplication as suggested by others. The major idea here is to distribute the task of solving a system of linear equations over a giant finite field to several much smaller finite fields. Based on our empirically measured bounds for the degree of the polynomials to be interpolated and the bit size of the coefficients and integers to be recovered via the CRT, we conclude an improvement of the run time complexity of the original algorithm by a factor of n^8 bit operations in the best case, and still n^6 in the worst case.
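The divide-and-reconstruct idea can be made concrete with a small sketch: run Gaussian elimination modulo several small primes and recover an integer result with the Chinese remainder theorem. The sketch below does this for a determinant; the primes and the matrix are toy choices, and the polynomial interpolation component of the thesis's algorithm is omitted.

def det_mod(matrix, p):
    """Determinant of an integer matrix modulo a prime p, via elimination."""
    a = [[x % p for x in row] for row in matrix]
    n, det = len(a), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det % p
        det = det * a[col][col] % p
        inv = pow(a[col][col], p - 2, p)      # Fermat inverse, p prime
        for r in range(col + 1, n):
            f = a[r][col] * inv % p
            for c in range(col, n):
                a[r][c] = (a[r][c] - f * a[col][c]) % p
    return det

def crt(residues, moduli):
    """Combine residues into the unique value modulo the product of moduli."""
    x, m = 0, 1
    for r, p in zip(residues, moduli):
        x += m * ((r - x) * pow(m, -1, p) % p)
        m *= p
    return x

primes = [10007, 10009, 10037, 10039]
matrix = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
residues = [det_mod(matrix, p) for p in primes]
d = crt(residues, primes)
m = 1
for p in primes:
    m *= p
if d > m // 2:                                # map back to the signed range
    d -= m
print("det =", d)                             # -90 for this toy matrix

Each modular computation is cheap because all arithmetic stays below the prime's size, and the CRT recombination is exact as long as the product of the primes exceeds twice the magnitude of the integer result.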
KW - Linear algebra attack KW - Braid group-based cryptography KW - Row echelon form calculation using polynomial interpolation and the Chinese remainder theorem KW - Diffie-Hellman conjugacy problem KW - Kryptologie KW - Zopfgruppe KW - Diffie-Hellman-Algorithmus Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6476 ER - TY - THES A1 - Tueno, Anselme T1 - Multiparty Protocols for Tree Classifiers N2 - Cryptography is the scientific study of techniques for securing information and communication against adversaries. It is about designing and analyzing encryption schemes and protocols that protect data from unauthorized reading. However, in our modern information-driven society with highly complex and interconnected information systems, encryption alone is no longer enough as it makes the data unintelligible, preventing any meaningful computation without decryption. On the one hand, data owners want to maintain control over their sensitive data. On the other hand, there is a high business incentive for collaborating with an untrusted external party. Modern cryptography encompasses different techniques, such as secure multiparty computation, homomorphic encryption or order-preserving encryption, that enable cloud users to encrypt their data before outsourcing it to the cloud while still being able to process and search on the outsourced and encrypted data without decrypting it. In this thesis, we rely on these cryptographic techniques for computing on encrypted data to propose efficient multiparty protocols for order-preserving encryption, decision tree evaluation and kth-ranked element computation. We start with order-preserving encryption (OPE), which allows encrypting data while still enabling efficient range queries on the encrypted data. However, OPE is symmetric, limiting the use case to one client and one server. Imagine a scenario where a Data Owner (DO) outsources encrypted data to the Cloud Service Provider (CSP) and a Data Analyst (DA) wants to execute private range queries on this data. Then either the DO must reveal its encryption key or the DA must reveal the private queries. We overcome this limitation by allowing the equivalent of a public-key OPE. Decision trees are common and very popular classifiers because they are explainable. The problem of evaluating a private decision tree on private data consists of a server holding a private decision tree and a client holding a private attribute vector. The goal is to classify the client’s input using the server’s model such that the client learns only the result of the classification, and the server learns nothing. In a first approach, we represent the tree as an array and execute only d interactive comparisons (instead of 2^d as in existing solutions), where d denotes the depth of the tree. In a second approach, we delegate the complete tree evaluation to the server using somewhat or fully homomorphic encryption where the ciphertexts are encrypted under the client’s public key. A generalization of a decision tree is a random forest that consists of many decision trees. A classification with a random forest evaluates each decision tree in the forest and outputs the classification label which occurs most often. Hence, the classification labels are ranked by their number of occurrences and the final result is the best-ranked one. The best-ranked element is a special case of the kth-ranked element.
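Returning to the first approach above, the array representation that enables only d comparisons can be sketched in plaintext as follows. The thresholds, attribute indices, and labels are made up for illustration, and the real protocol performs these comparisons obliviously on encrypted data.

# Node i has children 2*i + 1 and 2*i + 2; leaves carry labels.
d = 3                                  # tree depth
internal = 2**d - 1                    # number of internal nodes
attr = [0, 1, 2, 0, 1, 2, 0]           # attribute tested at each internal node
thr  = [5, 3, 8, 1, 4, 7, 6]           # threshold at each internal node
leaves = ["A", "B", "A", "C", "B", "C", "A", "B"]   # 2**d leaf labels

def classify(x):
    i = 0
    for _ in range(d):                 # exactly d comparisons, one per level
        go_right = x[attr[i]] >= thr[i]
        i = 2 * i + (2 if go_right else 1)
    return leaves[i - internal]        # leaves are indexed after internal nodes

print(classify([6, 2, 9]))             # example attribute vector, prints "B"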
In this thesis, we consider the secure computation of the kth-ranked element in a distributed setting with applications in benchmarking and auctions. We propose different approaches for privately computing the kth-ranked element in a star network, using either garbled circuits or threshold homomorphic encryption. KW - Mathematik KW - Kryptologie Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8251 ER - TY - THES A1 - Taubmann, Benjamin T1 - Improving Digital Forensics and Incident Analysis in Production Environments by Using Virtual Machine Introspection N2 - Main memory forensics and its special form, virtual machine introspection (VMI), are powerful tools for digital forensics and can be used to improve the security of computer-based systems. However, their use in production systems is often not possible. This work identifies the causes and offers practical solutions to apply these techniques in cloud computing and on mobile devices to improve digital forensics and incident analysis. Four key challenges must be tackled. The first challenge is that many existing solutions are not reproducible, for example, because the corresponding software components are not available, obsolete, or incompatible. The use of these tools is also often complex and can lead to a crash of the system to be monitored in case of incorrect use. To solve this problem, this thesis describes the design and implementation of Libvmtrace, which is a framework for the introspection of Linux-based virtual machines. The focus of the developed design is to implement frequently used methods in encapsulated modules so that they are easy for developers to use, optimize and test. The second challenge is that many production systems do not provide an interface for main memory forensics and virtual machine introspection. To address this problem, this thesis describes possible solutions for how such an interface can be implemented on mobile devices and in cloud environments designed to protect main memory from unprivileged access. We discuss how cold boot attacks, the ARM TrustZone and the hypervisor of cloud servers can be used to acquire data from main memory. The third challenge is how to reconstruct information from main memory efficiently. This thesis describes how this question can be addressed by means of two practical examples. The first example involves extracting the keys of encrypted TLS connections from the main memory of applications to decrypt network traffic without affecting the performance of the monitored application. The TLSKex and DroidKex architectures describe two approaches to localize the keys efficiently with the help of semantic knowledge in the main memory of applications. The second example discusses how to monitor and document SSH sessions of potential attackers from outside of a virtual machine. It is important that the monitoring routines are not noticed by an attacker. To achieve this, we evaluate how to optimize the performance of the monitoring mechanism. The fourth challenge is how to deal with the performance degradation caused by introspection in production systems. This thesis discusses how this can be handled using the example of a SIEM system. To reduce the performance overhead, we describe how to configure the monitoring routine to collect only the information needed to detect incidents.
Also, we describe two approaches that permit the monitoring routine to be dynamically adjusted at runtime to extract more information if necessary, so that incidents can be better analyzed. KW - Digital Forensics KW - Virtual Machine Introspection KW - Production Environments KW - Incident Detection KW - Computerforensik KW - Eindringerkennung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8319 ER - TY - THES A1 - Kurz, Thomas T1 - Adapting Semantic Web Information Retrieval to Multimedia N2 - The amount of audio, video and image data on the Web is growing immensely, which leads to data management problems due to the hidden character of multimedia. Therefore, the interlinking of semantic concepts and media data with the aim of bridging the gap between the Internet of documents and the Web of Data has become a common practice. However, the value of connecting media to its semantic metadata is limited due to lacking access methods and the absence of an adapted query language specialized for media assets and fragments. This thesis aims to extend the standard query language for the Semantic Web (SPARQL) with media-specific concepts and functions. The main contributions of the work are an exhaustive survey of multimedia query languages of the last three decades, the SPARQL extension specification itself, and an approach for the efficient evaluation of the new query concepts. Additionally, I elaborate and evaluate a metadata-based media fragment similarity approach, which provides a basis for further language extensions. N2 - Das Wachstum multimedialer Daten wie Audio, Video und Bilder war in den letzten Jahren immens. Das Besondere an dieser Art der Daten ist die versteckte Semantik, die sich nur schwer mit herkömmlichen Information Retrieval Funktionen verbinden lässt und dadurch zu Problemen im Management der Multimedia Daten führt. Konzepte des Semantic Web eignen sich allerdings sehr gut, diese Lücke zu schließen, was sich in vielen Szenarien bereits positiv etabliert hat. Nichtsdestotrotz fehlen mit geeigneten Zugriffsmethoden und einer adaptierten Anfragesprache wichtige Teile, um dieses Konzept der verlinkten Multimedia Daten abzurunden und voll in einem End-to-End Prozess zu verwenden. In dieser Arbeit stelle ich eine Erweiterung der Standard-Anfragesprache im Semantic Web (SPARQL) um multimedia-spezifische Funktionen vor. Der wissenschaftliche Beitrag lässt sich dabei in drei Teile gliedern: Ein umfassendes Survey zu Multimedia Anfragesprachen der letzten 30 Jahre, die Erweiterung von SPARQL inklusive einer geeigneten Methodik zur Anfrageoptimierung, sowie ein Ansatz zur fragment-basierten Ähnlichkeitsberechnung von Bildern mit zugehöriger Evaluierung. KW - SPARQL KW - Semantic Web KW - Multimedia KW - SPARQL KW - Multimedia KW - Semantic Web KW - Web of Data KW - SPARQL-MM Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8276 ER - TY - THES A1 - Ehlers, Christoph T1 - Top-k Semantic Caching N2 - The subject of this thesis is the intelligent caching of top-k queries in an environment with high latency and low throughput. In such an environment, caching can be used to reduce network traffic and improve response time. Slow database connections of mobile devices and connections to offshored databases are practical use cases. A semantic cache is a query-based cache that caches query results and maintains their semantic description. It reuses partial matches of previous query results.
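The probe/remainder split described next can be pictured with a toy sketch over one-dimensional range predicates. The function and names are illustrative only; a real semantic cache reasons over general predicates and top-k order.

def split_query(query, cached):
    """Split a half-open interval query against a cached interval.

    Returns (probe, remainder) where probe can be answered from the cache
    and remainder must be fetched from the server; either may be empty.
    """
    q_lo, q_hi = query
    c_lo, c_hi = cached
    lo, hi = max(q_lo, c_lo), min(q_hi, c_hi)
    probe = (lo, hi) if lo < hi else None
    remainder = []
    if q_lo < lo or probe is None:
        remainder.append((q_lo, lo if probe else q_hi))
    if probe and q_hi > hi:
        remainder.append((hi, q_hi))
    return probe, remainder

# Cache holds salary in [1000, 3000); query asks for salary in [2000, 5000).
print(split_query((2000, 5000), (1000, 3000)))
# -> probe (2000, 3000) answered locally, remainder [(3000, 5000)] from server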
Each query that is processed by the semantic cache is split into two disjoint parts: one that can be completely answered with tuples of the cache (probe query), and another that requires tuples to be transferred from the server (remainder query). Existing semantic caches do not support top-k queries, i.e., ordered and limited queries. In this thesis, we present an innovative semantic cache that naturally supports top-k queries. The support of top-k queries in a semantic cache has considerable effects on cache elements, operations on cache elements (like creation, difference, intersection, and union), and query answering. Hence, we introduce new techniques for cache management and query processing. They enable the semantic cache to become a true top-k semantic cache. In addition, we have developed a new algorithm that can estimate the lower bounds of query results of sorted queries using multidimensional histograms. Using this algorithm, our top-k semantic cache is able to pipeline partial query results of top-k queries. Thereby, query execution performance can be significantly increased. We have implemented a prototype of a top-k semantic cache called IQCache (Intelligent Query Cache). An extensive and thorough evaluation with various benchmarks using our prototype demonstrates the applicability and performance of top-k semantic caching in practice. The experiments prove that the top-k semantic cache invariably outperforms simple hash-based caching strategies and scales very well. KW - Database KW - Caching KW - Semantic Caching KW - Top-k KW - Mobile KW - Semantisches Caching KW - Abfrageverarbeitung Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3055 ER - TY - THES A1 - Braun, Bastian T1 - Web-based Secure Application Control N2 - The world wide web today serves as a distributed application platform. Its origins, however, go back to a simple delivery network for static hypertexts. The legacy from these days can still be observed in the communication protocol used by increasingly sophisticated clients and applications. This thesis identifies the actual security requirements of modern web applications and shows that HTTP does not fit them: user and application authentication, message integrity and confidentiality, control-flow integrity, and application-to-application authorization. We explore the other protocols in the web stack and work out why they cannot fill the gap. Our analysis shows that the underlying problem is the connectionless property of HTTP. However, history shows that a fresh start with web communication is far from realistic. As a consequence, we come up with approaches that contribute to meeting the identified requirements. We first present impersonation attack vectors that begin before the actual user authentication, i.e. when secure web interaction and authentication seem to be unnecessary. Session fixation attacks exploit a responsibility mismatch between the web developer and the used web application framework. We describe and compare three countermeasures on different implementation levels: on the source code level, on the framework level, and on the network level as a reverse proxy. Then, we explain how the authentication credentials that are transmitted for the user login, i.e. the password, and for session tracking, i.e. the session cookie, can be complemented by browser-stored and user-based secrets, respectively.
This way, an attacker cannot hijack user accounts merely by phishing the user's password, because an additional browser-based secret is required for login. Also, the class of well-known session hijacking attacks is mitigated because a secret known only to the user must be provided in order to perform critical actions. In the next step, we explore alternative approaches to static authentication credentials. Our approach implements a trusted UI and a mutually authenticated session using signatures as a means to authenticate requests. This way, it establishes a trusted path between the user and the web application without exchanging reusable authentication credentials. As a downside, this approach requires support on the client side and on the server side in order to provide maximum protection. Another approach avoids client-side support but cannot implement a trusted UI and is thus susceptible to phishing and clickjacking attacks. Our approaches described so far increase the security level of all web communication at all times. This is why we investigate adaptive security policies that fit the actual risk instead of permanently restricting all kinds of communication, including non-critical requests. We develop a smart browser extension that detects when the user is authenticated on a website, meaning that she can be impersonated because all requests carry her identity proof. Uncritical communication, however, is released from restrictions to enable all intended web features. Finally, we focus on attacks targeting a web application's control-flow integrity. We explain them thoroughly, check whether current web application frameworks provide means for protection, and implement two approaches to protect web applications: The first approach is an extension for a web application framework and provides protection based on its configuration by checking all requests for policy conformity. The second approach generates its own policies ad hoc based on the observed web traffic, assuming that regular users only click on links and buttons and fill in forms but do not craft requests to protected resources. N2 - Das heutige World Wide Web ist eine verteilte Plattform für Anwendungen aller Art: von einfachen Webseiten über Online Banking, E-Mail, multimediale Unterhaltung bis hin zu intelligenten vernetzten Häusern und Städten. Seine Ursprünge liegen allerdings in einem einfachen Netzwerk zur Übermittlung statischer Inhalte auf der Basis von Hypertexten. Diese Ursprünge lassen sich noch immer im verwendeten Kommunikationsprotokoll HTTP identifizieren. In dieser Arbeit untersuchen wir die Sicherheitsanforderungen moderner Web-Anwendungen und zeigen, dass HTTP diese Anforderungen nicht erfüllen kann. Zu diesen Anforderungen gehören die Authentifikation von Benutzern und Anwendungen, die Integrität und Vertraulichkeit von Nachrichten, Kontrollflussintegrität und die gegenseitige Autorisierung von Anwendungen. Wir untersuchen die Web-Protokolle auf den unteren Netzwerk-Schichten und zeigen, dass auch sie nicht die Sicherheitsanforderungen erfüllen können. Unsere Analyse zeigt, dass das grundlegende Problem in der Verbindungslosigkeit von HTTP zu finden ist. Allerdings hat die Geschichte gezeigt, dass ein Neustart mit einem verbesserten Protokoll keine Option für ein gewachsenes System wie das World Wide Web ist. Aus diesem Grund beschäftigt sich diese Arbeit mit unseren Beiträgen zu sicherer Web-Kommunikation auf der Basis des existierenden verbindungslosen HTTP.
Wir beginnen mit der Beschreibung von Session Fixation-Angriffen, die bereits vor der eigentlichen Anmeldung des Benutzers an der Web-Anwendung beginnen und im Erfolgsfall die temporäre Übernahme des Benutzerkontos erlauben. Wir präsentieren drei Gegenmaßnahmen, die je nach Eingriffsmöglichkeiten in die Web-Anwendung umgesetzt werden können. Als Nächstes gehen wir auf das Problem ein, dass Zugangsdaten im WWW sowohl zwischen den Teilnehmern zu Authentifikationszwecken kommuniziert werden als auch für jeden, der Kenntnis dieser Daten erlangt, wiederverwendbar sind. Unsere Ansätze binden das Benutzerpasswort an ein im Browser gespeichertes Authentifikationsmerkmal und das sog. Session-Cookie an ein Geheimnis, das nur dem Benutzer und der Web-Anwendung bekannt ist. Auf diese Weise kann ein Angreifer weder ein gestohlenes Passwort noch ein Session-Cookie allein zum Zugriff auf das Benutzerkonto verwenden. Darauffolgend beschreiben wir ein Authentifikationsprotokoll, das vollständig auf die Übermittlung geheimer Zugangsdaten verzichtet. Unser Ansatz implementiert eine vertrauenswürdige Benutzeroberfläche und wirkt so gegen die Manipulation derselben in herkömmlichen Browsern. Während die bisherigen Ansätze die Sicherheit jeglicher Web-Kommunikation erhöhen, widmen wir uns der Frage, inwiefern ein intelligenter Browser den Benutzer - wenn nötig - vor Angriffen bewahren kann und - wenn möglich - eine ungehinderte Kommunikation ermöglichen kann. Damit trägt unser Ansatz zur Akzeptanz von Sicherheitslösungen bei, die ansonsten regelmäßig als lästige Einschränkungen empfunden werden. Schließlich legen wir den Fokus auf die Kontrollflussintegrität von Web-Anwendungen. Bösartige Benutzer können den Zustand von Anwendungen durch speziell präparierte Folgen von Anfragen in ihrem Sinne manipulieren. Unsere Ansätze filtern Benutzeranfragen, die von der Anwendung nicht erwartet wurden, und lassen nur solche Anfragen passieren, die von der Anwendung ordnungsgemäß verarbeitet werden können. KW - Computersicherheit KW - Datensicherung KW - Internet Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3048 ER - TY - THES A1 - Tran, Nguyen Khanh Linh T1 - Kaehler Differential Algebras for 0-Dimensional Schemes and Applications N2 - The aim of this dissertation is to investigate Kaehler differential algebras and their Hilbert functions for 0-dimensional schemes in P^n. First we give relations between Kaehler differential 1-forms of fat point schemes and other fat point schemes. Then we determine the Hilbert polynomial and give a sharp bound for the regularity index of the module of Kaehler differential m-forms, for 05%) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without dataset-specific tuning. Moreover, we discuss the influence of the quality of the knowledge base on the disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms. KW - Entity Linking KW - Entity Disambiguation KW - Neural Networks KW - Embeddings Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3704 ER - TY - THES A1 - Alshawish, Ali T1 - Risk-based Security Management in Critical Infrastructure Organizations N2 - Critical infrastructure and contemporary business organizations are experiencing an ongoing paradigm shift of business towards more collaboration and agility.
On the one hand, this shift seeks to enhance business efficiency, coordinate large-scale distribution operations, and manage complex supply chains. On the other hand, it makes traditional security practices such as firewalls and other perimeter defenses insufficient. Therefore, concerns over risks like terrorism, crime, and business revenue loss increasingly impose the need for enhancing and managing security within the boundaries of these systems so that unwanted incidents (e.g., potential intrusions) can still be detected with higher probability. To this end, critical infrastructure organizations step up their efforts to investigate new possibilities for actively engaging in situational awareness practices to ensure a high level of persistent monitoring as well as on-site observation. Compliance with security standards is necessary to ensure that organizations meet regulatory requirements mostly shaped by a set of best practices. Nevertheless, it does not necessarily result in a coherent security strategy that considers the different aims and practical constraints of each organization. In this regard, there is a growing demand for risk-based security management approaches that enable critical infrastructures to focus their efforts on mitigating the risks to which they are exposed. Broadly speaking, security management involves the identification, assessment, and evaluation of long-term (or overall) objectives and interests as well as the means of achieving them. Due to the critical role of such systems, their decision-makers tend to enhance the system resilience against very unpleasant outcomes and severe consequences. That is, they seek to avoid decision options associated with likely extreme risks in the first place. Practically speaking, this risk attitude can significantly influence the decision-making process in such critical organizations. Towards incorporating the aversion to extreme risks into security management decisions, this thesis thoroughly investigates the capabilities of a recently emerged theory of games with payoffs that are probability distributions. Unlike traditional optimization techniques, this theory provides an alternative decision technique that is more robust to extreme risks and uncertainty. Furthermore, this thesis proposes a new method that gives a decision maker more control over the decision-making process through defining loss regions with different importance levels according to people's risk attitudes. In this way, the static decision analysis used in the distribution-valued games is transformed into a dynamic process to adapt to different subjective risk attitudes or account for future changes in the decision caused by a learning process or other changes in the context. Throughout its different parts, this thesis shows how theoretical models, simulation, and risk assessment models can be combined into practical solutions. In this context, it deals with three facets of security management: allocating limited security resources, prioritizing security actions, and tweaking decision making. Finally, the author discusses experiences and limitations distilled from this research and from investigating the new theory of games, which can be taken into account in future approaches.
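To make the appeal of distribution-valued payoffs concrete, the following toy sketch compares two hypothetical defense options by the tail mass of their loss distributions. The simple tail-probability rule and all numbers are illustrative assumptions, not the stochastic order actually used in the thesis:

import numpy as np

rng = np.random.default_rng(0)
loss_a = rng.normal(10.0, 1.0, 10_000)   # moderate but predictable losses
loss_b = rng.pareto(2.5, 10_000) * 4.0   # small median, heavy tail

threshold = 15.0                          # boundary of the "extreme loss" region
tail_a = np.mean(loss_a > threshold)
tail_b = np.mean(loss_b > threshold)

# A risk-averse decision maker picks the option least likely to exceed the
# extreme-loss threshold, even if its expected loss is higher.
best = "A" if tail_a < tail_b else "B"
print(f"P(A > {threshold}) = {tail_a:.4f}, P(B > {threshold}) = {tail_b:.4f} -> choose {best}")

Here option B has the lower expected loss, yet the extreme-risk-averse attitude described above still favors A, because B's heavy tail makes catastrophic losses far more likely.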
KW - Security Management KW - Game Theory KW - Critical Infrastructures KW - Risk Attitude KW - Uncertainty KW - Spieltheorie KW - Risikomanagement Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10026 ER - TY - THES A1 - Silva, Vivian dos Santos T1 - A Composite Syntactic-Semantic Interpretable Text Entailment Approach Exploring Commonsense Knowledge Graphs N2 - Natural Language Processing has an important role in Artificial Intelligence for easing human-machine interaction. Processing human language, though, poses many challenges, among which is the semantics-related phenomenon known as language variability, the fact that the same thing can be said in several ways. NLP applications' inputs and outputs can be expressed in different forms, whose equivalence can be verified through inference. The textual entailment paradigm was established to enable the creation of a unifying framework for applied inference, providing a means of delivering other NLP tasks from handling inference issues in an ad-hoc manner, using instead the outputs of an inference-dedicated mechanism. Text entailment, the task of determining whether a piece of text logically follows from another piece of text, involves different scenarios, which can range from a simple syntactic variation to more complex semantic relationships between sentences. However, most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. The commonsense world knowledge necessary to support more complex inferences is also usually employed in a limited way, with most approaches sticking to shallow semantic information, leaving more elaborate semantic relationships aside. Furthermore, most systems still work as a "black box", providing a yes/no answer that does not explain the underlying reasoning process. This thesis aims at addressing these issues by proposing a composite interpretable approach for recognizing text entailment where the entailment pair is analyzed so the most relevant phenomenon is detected and the suitable method can be used to solve it. Syntactic variations are dealt with through the analysis of the sentences' syntactic structures, and semantic relationships are detected with the aid of a knowledge graph built from natural language dictionary definitions. Also, if a semantic matching is involved, the answer is made interpretable through the generation of natural language justifications that explain the semantic relationship between the pieces of text. The result is the XTE - Explainable Text Entailment - a system that outperforms well-established tools based on single-technique entailment algorithms, and that also takes an important step towards Explainable AI, allowing the interpretation of the inference model, making the semantic reasoning process explicit and understandable. KW - Textual Entailment KW - Knowledge Graph KW - Semantic Interpretability Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10706 ER - TY - THES A1 - Opris, Andre T1 - Holomorphic Extensions in the Structure R_{an,exp} N2 - In this thesis we consider real analytic functions, i.e., functions which can be described locally as convergent power series, and ask the following: Which real analytic functions definable in R_{an,exp} have a holomorphic extension which is again definable in R_{an,exp}? Finding a holomorphic extension is of course not difficult simply by power series expansion. The difficulty is to construct it in a definable way.
We will not answer the question above completely, but introduce a large non-trivial class of definable functions in R_{an,exp} which contains, for example, functions that are iterated compositions from either side of globally subanalytic functions and the global logarithm. We call them restricted log-exp-analytic. After giving some preliminary results like preparation theorems and Tamm's Theorem for this class of functions, we are able to show that real analytic restricted log-exp-analytic functions have a holomorphic extension which is again restricted log-exp-analytic. KW - O-Minimality KW - Preparation Theorems KW - Restricted Log-Exp-Analytic Functions KW - Complexification KW - Tamm's Theorem KW - O-Minimalität Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10691 ER - TY - THES A1 - Schmid, Josef T1 - Learning-Based Quality of Service Prediction in Cellular Vehicle Communication N2 - Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. A new area where this development is on the rise is the field of connected vehicles. It is especially useful for automated vehicles in order to connect them with other road users or cloud services. In particular for the latter it is beneficial to establish a mobile network connection, as it is already widely used and no additional infrastructure is needed. With the use of network communication, certain requirements come along. One of them is the reliability of the connection. Certain Quality of Service (QoS) parameters need to be met. In case of degraded QoS, according to the SAE level specification, a downgrade of the automated system can be required, which may lead to a takeover maneuver, in which control is returned to the driver. Since such a handover takes time, prediction is necessary to forecast the network quality for the next few seconds. Prediction of QoS parameters, especially in terms of Throughput (TP) and Latency (LA), is still a challenging task, as the wireless transmission properties of a moving mobile network connection are subject to fluctuation. In this thesis, a new approach for predicting Network Quality Parameters (NQPs) on Transmission Control Protocol (TCP) level is presented. It combines the knowledge of the environment with the low-level parameters of the mobile network. The aim of this work is to perform a comprehensive study of various models, including both Location Smoothing (LS) grid maps and Learning Based (LB) regression models. Moreover, the possibility of using the location independence of a model as well as its suitability for automated driving are evaluated. N2 - Netzwerkkommunikation ist zu einem Teil des täglichen Lebens geworden, und die Vernetzung von Geräten und Menschen wird in Zukunft noch weiter zunehmen. Ein neuer Bereich, in dem diese Entwicklung zunimmt, sind vernetzte Fahrzeuge. Es ist vorteilhaft, automatisierte Fahrzeuge mit anderen Verkehrsteilnehmern oder Cloud-Diensten zu verbinden. Insbesondere für letztere ist der Einsatz einer mobilen Netzwerkverbindung zweckmäßig, da sie bereits weit verbreitet ist und keine zusätzliche Infrastruktur erfordert. Mit der Nutzung des Netzwerkes gehen auch einige Anforderungen einher. Die Zuverlässigkeit der Verbindung ist entscheidend. Kann keine ausreichende Qualität der Verbindung gewährleistet werden, kann nach SAE-Spezifikation das Herunterstufen der Automatisierungsstufe erforderlich sein.
In letzter Konsequenz kann dies schließlich zu einem Übernahmemanöver führen, wobei die Kontrolle wieder an den Fahrer zurückgegeben wird. Da ein solcher Wechsel Zeit in Anspruch nimmt, ist eine Vorhersage erforderlich, um die Netzqualität in den nächsten Sekunden zu prognostizieren. Eine solche Vorhersage der Dienstgüte (Quality of Service (QoS)), besonders hinsichtlich Durchsatz und Latenz, ist nach wie vor eine recht anspruchsvolle Aufgabe, da die drahtlosen Übertragungseigenschaften einer sich bewegenden mobilen Netzwerkverbindung großen Schwankungen unterliegen. In dieser Dissertation wird ein neuer Ansatz für die Vorhersage von Network Quality Parameters (NQPs) auf der Ebene des Transmission Control Protocol (TCP) vorgestellt. Er kombiniert das Wissen über die Umgebung mit den Parametern des Mobilfunknetzes. Das Ziel dieser Arbeit ist eine umfangreiche Untersuchung verschiedener Modelle, darunter sowohl lokalisationsglättende Kachel-Karten als auch Regressionsverfahren aus dem Bereich des Maschinellen Lernens. Darüber hinaus wird die Möglichkeit der Nutzung der Ortsunabhängigkeit eines Modells erörtert sowie dessen Eignung für automatisiertes Fahren evaluiert. Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10772 ER - TY - THES A1 - Niklaus, Christina T1 - From Complex Sentences to a Formal Semantic Representation using Syntactic Text Simplification and Open Information Extraction N2 - Sentences that present a complex linguistic structure act as a major stumbling block for Natural Language Processing (NLP) applications whose predictive quality deteriorates with sentence length and complexity. The task of Text Simplification (TS) may remedy this situation. It aims to modify sentences in order to make them easier to process, using a set of rewriting operations, such as reordering, deletion or splitting. These transformations are executed with the objective of converting the input into a simplified output, while preserving its main idea and keeping it grammatically sound. State-of-the-art syntactic TS approaches suffer from two major drawbacks: first, they follow a very conservative approach in that they tend to retain the input rather than transforming it, and second, they ignore the cohesive nature of texts, where context spread across clauses or sentences is needed to infer the true meaning of a statement. To address these problems, we present a discourse-aware TS framework that is able to split and rephrase complex English sentences within the semantic context in which they occur. By generating a fine-grained output with a simple canonical structure that is easy to analyze by downstream applications, we tackle the first issue. For this purpose, we decompose a source sentence into smaller units by using a linguistically grounded transformation stage. The result is a set of self-contained propositions, with each of them presenting a minimal semantic unit. To address the second concern, we suggest not only to split the input into isolated sentences, but to also incorporate the semantic context in the form of hierarchical structures and semantic relationships between the split propositions. In that way, we generate a semantic hierarchy of minimal propositions that benefits downstream Open Information Extraction (IE) tasks. To function well, the TS approach that we propose requires syntactically well-formed input sentences.
It targets general-purpose texts in English, such as newswire or Wikipedia articles, which commonly contain a high proportion of complex assertions. In a second step, we present a method that allows state-of-the-art Open IE systems to leverage the semantic hierarchy of simplified sentences created by our discourse-aware TS approach in constructing a lightweight semantic representation of complex assertions in the form of semantically typed predicate-argument structures. In that way, important contextual information of the extracted relations is preserved that allows for a proper interpretation of the output. Thus, we address the problem of extracting incomplete, uninformative or incoherent relational tuples that is commonly observed in existing Open IE approaches. Moreover, assuming that shorter sentences with a more regular structure are easier to process, the extraction of relational tuples is facilitated, leading to a higher coverage and accuracy of the extracted relations when operating on the simplified sentences. Aside from taking advantage of the semantic hierarchy of minimal propositions in existing Open IE approaches, we also develop an Open IE reference system, Graphene. It implements a relation extraction pattern upon the simplified sentences. The framework we propose is evaluated within our reference TS implementation DisSim. In a comparative analysis, we demonstrate that our approach outperforms the state of the art in structural TS both in an automatic and a manual analysis. It obtains the highest score on three simplification datasets from two different domains with regard to SAMSA (0.67, 0.57, 0.54), a recently proposed metric targeted at automatically measuring the syntactic complexity of sentences, which highly correlates with human judgments on structural simplicity and grammaticality. These findings are supported by the ratings from the human evaluation, which indicate that our baseline implementation DisSim returns fine-grained simplified sentences that achieve a high level of syntactic correctness and largely preserve the meaning of the input. Furthermore, a comparative analysis with the annotations contained in the RST Discourse Treebank (RST-DT) reveals that we are able to capture the contextual hierarchy between the split sentences with a precision of approximately 90% and reach an average precision of almost 70% for the classification of the rhetorical relations that hold between them. Finally, an extrinsic evaluation shows that when applying our TS framework as a pre-processing step, the performance of state-of-the-art Open IE systems can be improved by up to 32% in precision and 30% in recall of the extracted relational tuples. Accordingly, we can conclude that our proposed discourse-aware TS approach succeeds in transforming sentences that present a complex linguistic structure into a sequence of simplified sentences that are to a large extent grammatically correct, represent atomic semantic units and preserve the meaning of the input. Moreover, the evaluation provides sufficient evidence that our framework is able to establish a semantic hierarchy between the split sentences, generating a fine-grained representation of complex assertions in the form of hierarchically ordered and semantically interconnected propositions.
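The following hand-made example sketches the kind of output such a discourse-aware splitting produces; the sentence, proposition labels, and relation names are illustrative and not taken from the thesis or its datasets:

# One complex input sentence, decomposed into minimal, self-contained
# propositions plus the rhetorical relations linking context to core.
complex_sentence = ("Although the company lost money in 2019, it kept hiring, "
                    "because demand was expected to recover.")

propositions = {
    1: "The company lost money in 2019.",  # contextual proposition
    2: "The company kept hiring.",         # core proposition
    3: "Demand was expected to recover.",  # contextual proposition
}

# Semantic hierarchy as (core, relation, context) triples.
relations = [
    (2, "CONTRAST", 1),  # "although ..."
    (2, "CAUSE", 3),     # "because ..."
]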
Finally, we demonstrate that state-of-the-art Open IE systems benefit from using our TS approach as a pre-processing step by increasing both the accuracy and coverage of the extracted relational tuples for the majority of the Open IE approaches under consideration. In addition, we outline that the semantic hierarchy of simplified sentences can be leveraged to enrich the output of existing Open IE systems with additional meta information, thus transforming the shallow semantic representation of state-of-the-art approaches into a canonical context-preserving representation of relational tuples. KW - Text Simplification KW - Syntactic Simplification KW - Open Information Extraction KW - Semantic Representation KW - Complex Sentences Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10540 ER - TY - THES A1 - Mandarawi, Waseem T1 - Multi-objective Network Virtualization and its Applicability to Industrial Networks N2 - Network virtualization provides high flexibility for deploying communication services in dense and heterogeneous environments. Two main approaches (dimensions) that are usually combined exist: Network Function Virtualization (NFV) technologies for functionality virtualization and Virtual Network Embedding (VNE) algorithms for resource virtualization. These approaches can be applied to different network levels, such as the factory and enterprise levels of industrial networks. Several objectives and constraints, which might be conflicting, have to be considered when network virtualization is applied, particularly in complex topologies. This thesis proposes a network virtualization model that considers both virtualization dimensions, two network levels, and different objectives and constraints. The network levels considered are two primary levels in industrial networks. However, this consideration does not restrict the model to a particular environment or certain levels. The considered objectives/constraints are topology, reliability, security, performance, and resource usage. Based on this model, we first build an overall combined solution for autonomic and composite virtual networking. This solution considers both virtualization dimensions, two network levels, and target objectives. Furthermore, this solution combines three novel virtualization sub-approaches that consider performance, reliability, and security. However, the sub-approaches apply to different combinations of levels and dimensions, and the reliability approach additionally considers the resource usage objective. After presenting all solutions, we map them to the defined model. Regarding applicability to industrial networks, the combined approach is applied to an enterprise-level Industrial Internet of Things (IIoT) use case inspired by the smart factory concept in Industry 4.0. However, the sub-approaches are applied to more specific use cases. The performance and reliability solutions are integrated with relevant components of the Time Sensitive Networking (TSN) standard as a modern technology for industrial networks. The goal is to enrich the reliability and performance capabilities of TSN with the flexibility of network virtualization. In the combined approach, we compose and embed an environment-aware Extended Virtual Network (EVN) that represents the physical devices, virtual application functions, and required Service Function Chains (SFCs). We use the graph transformation method to transform abstract application requirements (represented by an Application Request (AR)) into an EVN.
Both EVN composition and embedding methods consider the Substrate Network (SN) topology and different security, reliability, performance, and resource usage policies. These policies are applied with a certain priority and depend on the properties of communicating entities such as location and type. The EVN is embedded using property-based node mapping, reliability-aware branching, and a greedy chain embedding heuristic. The chain embedding heuristic is evaluated using a random topology that represents the use case. The performance sub-approach is NFV-based and is applied to a specific use case with Time-critical Traffic (TCT) flows. We develop and evaluate a complete framework for virtualizing the Time-aware Shaper (TAS) using high-performance NFV. The reliability sub-approach is VNE-based and is applied to a specific factory-level use case. We develop minimal and maximal branching heuristics based on a reliability-aware k-shortest path algorithm and compare them using a typical factory topology. We then integrate these algorithms with a Frame Replication and Elimination for Reliability (FRER) simulator to realize reliability policies by the autonomic and efficient configuration of a supporting technology. The security sub-approaches are related to both virtualization dimensions and are applied to generic enterprise-level use cases. However, the applicability of the security aspect to industrial networks is only shown in the combined (EVN) approach and its use case. We research autonomic security management in Network Function Virtualization Infrastructure (NFVI) with the main goal of early reaction to threats through SFC reconfiguration via Virtual Network Function (VNF) live migration. This goal is approached by supporting the security measures with a decision-making architecture that considers, on the one hand, the threats and events in the environment and, on the other hand, the Service Level Agreement (SLA) between the NFVI provider and user. For this purpose, we classify the VNF-specific attacks and define possible early detectable behavior patterns. Finally, we develop a security-aware VNE heuristic that considers the security requirements of the Virtual Network (VN) and the security capabilities of the SN. This approach is modified in the combined approach to consider deploying virtualized security VNFs. KW - Network Virtualization KW - Industrial Networks KW - Virtual Network Embedding KW - Network Function Virtualization KW - Time Sensitive Networking Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10606 ER - TY - THES A1 - Alyousef, Ammar T1 - E-Mobility Management: Towards a Grid-friendly Smart Charging Solution N2 - Replacing fossil-fueled vehicles with Electric Vehicles (EVs) poses new challenges for power distribution networks. Specifically speaking, the electrification of the mobility sector relies on the ability to process and analyze information on when, where, for how long, or how fast charging processes will take place. Nevertheless, such information is typically difficult to acquire or insufficiently predictable due to the dynamic nature of the system. Also, the increasing adoption rate of renewable energy sources, specifically domestic Photovoltaic (PV) systems, and the potentially associated grid defection scenarios will significantly impact the cost and efforts required to operate the grid in terms of power quality and demand-supply aspects.
However, such emerging requirements have arguably not been taken into account when the distribution grid was built originally. Besides, expanding the distribution and transmission capacity is a very costly and lengthy process. Therefore, any proposed solution should be cost-effective as well as environment-, grid- and user-friendly. To this end, the advancements in Information and Communications Technology (ICT) are increasingly adopted and applied. This thesis addresses the rapidly growing EV sector and deals with the problem of overcoming potential power quality degradation caused by the challenges mentioned above. Since time switch and radio ripple control, the existing solutions in Germany, are costly and neither very effective nor scalable, as they require hardware retrofitting of existing public Charging Stations (CSs), the primary focus of this work is the development of an appropriate, standards-based, scalable, and smart charging solution for EVs. Such a solution can, in turn, boost the usage of renewable energy by ensuring that the existing grid infrastructure can operate within its permissible limits while maintaining acceptable levels of power quality. This work introduces a new definition of the concept “grid-friendly EV charging”, where the power demand of a CS is adjusted depending on the real-time status of a power grid. In this regard, the conflicting concerns of stakeholders in an EV ecosystem are considered. For example, a Distribution System Operator (DSO) does not want to reveal a lot of technical details about the power grid or its status. Similarly, a Charging Service Provider (CSP) wants to keep its clients happy without sharing the details of its business model with others, namely, DSOs. To this end, a distributed smart charging architecture is proposed in this thesis. It is event-driven and responds in nearly real-time to unforeseen and critical grid situations such as high/low voltage, congestion, phase unbalance, and harmonics. In that regard, the publish/subscribe messaging pattern, used as a part of the architecture, enables an efficient and well-performing communication scheme among the different components. Moreover, an indication mechanism for the different issues in a power grid is developed; it adopts the traffic light model. It works as a black box towards the separate smart controllers of each CS and is configured only by the CSP. Smart chargers enable a smooth adjustment of the charging power to avoid drastic changes in the grid state. To that end, two types of intelligent controllers are developed and tested. While the first controller is inspired by fuzzy logic, the second one is inspired by the slow-start mechanism used in TCP to control congestion in computer networks. A simulative approach is applied to evaluate the solution; thereby, a topology of a real low-voltage grid with realistic load and generation profiles is used. Furthermore, a set of metrics is defined regarding the main concerns of stakeholders: voltage, overloading, fairness, the satisfaction of EV users and the grid operator, as well as the grid-friendly behavior of a CS/EV user. The evaluation shows that the solution is able to guarantee a safe operation of the grid. The proposed system can ensure grid-friendly charging by sacrificing a small portion of user satisfaction; that sacrifice is rewarded via a points-based reward system.
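A minimal sketch illustrates how a slow-start-inspired charging controller could react to the traffic-light indication; the constants, signal names, and adaptation factors below are illustrative assumptions, not the controller parameters from the thesis:

I_MIN, I_MAX = 6.0, 32.0  # assumed charging current bounds in ampere

def next_current(current: float, grid_signal: str) -> float:
    # One control step: probe for spare capacity, back off on grid stress.
    if grid_signal == "green":               # no grid issue indicated
        return min(current * 2.0, I_MAX)     # exponential ramp-up ("slow start")
    if grid_signal == "yellow":              # grid approaching a limit
        return min(current + 1.0, I_MAX)     # cautious linear increase
    return max(current / 2.0, I_MIN)         # "red": multiplicative back-off

i = I_MIN
for signal in ["green", "green", "yellow", "red", "green"]:
    i = next_current(i, signal)
    print(signal, "->", round(i, 1), "A")

The multiplicative decrease mirrors TCP's reaction to congestion while keeping the charging current within its permissible bounds.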
Last but not least, the proposed distributed controllers are compared to two other controllers: (1) a decentralized controller based only on sensing the local voltage and (2) a very strict centralized controller focusing on grid-friendliness. The latter ensures proportional fairness among users regarding the objective function of the optimization problem solved in each simulation step. The distributed controllers are superior to the decentralized controller in terms of grid-friendliness and fairness and, in general, converge to the centralized one. KW - E-Mobility KW - Smart Charging KW - Grid-Friendliness KW - Elektromobilität KW - Lademanagement KW - Netzstabilität Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9302 ER - TY - THES A1 - Niedermeier, Florian T1 - Power-Adaptive Computing in Future Energy Networks N2 - The current electricity grid is undergoing major changes. There is increasing pressure to move away from power generation from fossil fuels, both due to ecological concerns and fear of dependencies on scarce natural resources. Increasing the share of decentralized generation from renewable sources is a widely accepted way to a more sustainable power infrastructure. However, this comes at the price of new challenges: generation from solar or wind power is not controllable and only forecastable with limited accuracy. To compensate for the increasing volatility in power generation, exerting control on the demand side is a promising approach. By providing flexibility on the demand side, imbalances between power generation and demand may be mitigated. This work is concerned with developing methods to provide grid support on the demand side while limiting the associated costs. This is done in four major steps: first, the target power curve to follow is derived, taking both the goals of a grid authority and the costs of the respective load into account. In the following, the special case of data centers as an instance of significant loads inside a power grid is focused on more closely. Data center services are adapted in such a way as to achieve the previously derived power curve. By means of hardware power demand models, the required adaptation of hardware utilization can be derived. The possibilities of adapting software services are investigated for the special use case of live video encoding. A method to minimize quality of experience loss while reducing power demand is presented. Finally, the possibility of applying probabilistic model checking to a continuous demand-response scenario is demonstrated. KW - Power-adaptive software KW - Energy systems KW - Energieversorgung KW - Software Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9993 ER - TY - THES A1 - Ansah, Frimpong T1 - Performance and optimization technologies for software defined industrial networks N2 - The concept of programmable networks is radically changing the way communication infrastructures are designed, integrated, and operated. Currently, the topic is spearheaded by concepts such as software-defined networking, forwarding and control element separation, and network function virtualization. Notably, software-defined networking has attracted significant attention in telecommunication networks and data centers and is thus already deployed in some production-grade networks. Despite the prevalence of software-defined networking in these domains, industrial networks have yet to see the benefits that would encourage its adoption.
However, the misconceptions around the concept itself, the role of virtualization, and algorithms pose a significant obstacle. Furthermore, the desire to accommodate new services in the automation industry results in constantly increasing complexity of industrial networks, which is compounded by the requirement to provide stringent deterministic service guarantees for characteristically different applications. This poses a significant challenge for management, configuration, and maintenance, as existing solutions are architecturally inflexible. Therefore, the first contribution of this thesis addresses the misconceptions around software-defined networking by providing a comparative analysis of programmable network concepts, detailing how software-defined networks compare with other concepts and how their principles can be leveraged to evolve industrial networks. Armed with the fundamental principles of programmable networks, the second contribution identifies virtualization technologies and proposes novel algorithms to provide varied quality of service guarantees on converged time-sensitive Ethernet networks using software-defined networking concepts. Finally, a performance analysis of a software-defined hybrid deployment solution for control and management of time-sensitive Ethernet networks that integrates the proposed novel algorithms is presented as an industrial use case that enables industrial operators to harness the full potential of time-sensitive networks. KW - Performance KW - Software Defined Industrial Networks KW - Virtual Network Embedding KW - Schedulability Analysis KW - Worst-case Delay Analysis KW - Deterministic Petri-net and Queuing networks Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9002 PB - Universität Passau CY - Passau ER - TY - THES A1 - Lang, Thomas T1 - AI-Supported Interactive Segmentation of 3D Volumes N2 - The segmentation of volumetric datasets, i.e., the partitioning of the data into disjoint sub-volumes with the goal to extract information about these regions, is a difficult problem and has been discussed in medical imaging for decades. Due to the ever-increasing imaging capabilities, in particular in X-ray computed tomography (CT) or magnetic resonance imaging, segmentation in industrial applications is also gaining interest. Especially in industrial applications the generated datasets increase in size. Hence, most applications apply well-known techniques in a 2+1-dimensional manner, i.e., they apply image segmentation procedures on each slice separately and track the progress along the axis on which the slices are stacked. This discards the information on preceding or subsequent slices, which is often assumed to be nearly identical. However, in the industrial context this might prove wrong since industrial parts might change their appearance significantly over the course of even a few slices. Moreover, artifacts can further distort the content of the slices. Therefore, three-dimensional processing of voxel volumes has to be preferred, which induces constraints upon the segmentation procedures. For example, they must not consider global information as it is usually not feasible in big scans to compute it efficiently. Yet another frequent problem is that applications focus on individual parts only and algorithms are tailored to that case.
The most prominent medical segmentation procedures do so by applying methods that specifically find, for example, the liver and only the liver of a patient. The implication is that the same method then cannot be applied to find other parts of the scan, and such methods have to be designed individually for any object to be segmented. Flexible segmentation methods are needed, too, specifically when partitioning unique scans. We define a unique scan to be a voxel dataset for which no comparable volume exists. Classical examples include the use case of cultural heritage, where not only the objects themselves are unique but also scan parameters are optimized to obtain the best image quality possible for that specific scan. This thesis aims at introducing novel methods for voxelwise classifications based on local geometric features. The latter are computed from local environments around each voxel and extract information in similar ways as humans do, namely by observing their similarity to geometric or textural primitives. These features serve as the foundation for learning the proposed voxelwise classifiers that discriminate between segmented and unsegmented voxels. On the one hand, they perform fully automated clustering of volumes for which a representative random sample is extracted first. On the other hand, a set of segmenting classifiers can be trained from few seed voxels, i.e., volume elements for which a domain expert marked whether they belong to the components that shall be segmented. The interactive selection offers the advantage that no completely labeled voxel volumes are necessary and hence that unique scans of objects can be segmented for which no comparable scans exist. Overall, it will be shown that all proposed segmentation methods are effectively of linear runtime with respect to the number of voxels in the volume. Thus, voxel volumes without size restrictions can be segmented in an efficient linear pass through the volume. Finally, the segmentation performance is evaluated on selected datasets, which shows that the introduced methods can achieve good results on scans from a broad variety of domains for both small and big voxel volumes. N2 - Die Segmentierung von Volumendaten, also die Partitionierung der Daten in disjunkte Teilvolumen zur weiteren Informationsextraktion, ist ein Problem, welches in der medizinischen Bildverarbeitung seit Jahrzehnten behandelt wird. Bedingt durch die sich ständig verbessernden Bilderfassungsmethoden, speziell im Bereich der Röntgen-Computertomographie (CT) oder der Magnetresonanztomographie, gewinnt die Segmentierung von industriellen Volumendaten auch an Wichtigkeit. Insbesondere im industriellen Kontext steigt die Größe der zu segmentierenden Daten jedoch rasant an, so dass sich die meisten Segmentierungsapplikationen auf den 2+1-dimensionalen Fall beschränken, also Bilder verarbeiten und die Ergebnisse über mehrere Bilder hinweg verfolgen. Jedoch werden somit beispielsweise geometrische Informationen über benachbarte Schichten ignoriert. Diese können sich aber gerade im industriellen Bereich signifikant ändern. Aus diesem Grund ist hier die dreidimensionale Bildverarbeitung vorzuziehen. Dadurch ergeben sich neue Einschränkungen, beispielsweise können keine globalen Informationen zur Segmentierung herangezogen werden, da diese typischerweise nicht effizient berechenbar sind. Ferner fokussieren sich dreidimensionale Methoden aus medizinischen Bereichen zumeist auf bestimmte Bestandteile der Daten, wie einzelne Organe.
Dies schränkt die Generalität dieser Methoden signifikant ein und somit sind separate Verfahren für jedes zu segmentierende Objekt notwendig. Flexible Methoden sind darüber hinaus bei Anwendung auf einzigartige Scans erforderlich. Ein einzigartiger Scan ist ein Voxelvolumen, für welches kein vergleichbares Datum existiert. Klassische Beispiele sind Kulturgutdigitalisate, da dort nicht nur die Objekte einzigartig sind, sondern auch die Aufnahmeparameter spezifisch für diesen einen Scan optimiert wurden. Die vorliegende Dissertation führt neuartige Methoden zur voxelweisen dreidimensionalen Segmentierung von Volumendaten auf Basis lokaler geometrischer Informationen ein. Die Bewertung dieser Informationen imitiert die menschliche Objektwahrnehmung, indem lokale Regionen mit geometrischen oder strukturellen Primitiven verglichen werden. Mit Hilfe dieser Bewertungen werden voxelweise anzuwendende Klassifikatoren trainiert, welche zwischen erwünschten und unerwünschten Voxeln unterscheiden sollen. Ein Teil dieser Klassifikatoren führt eine vollautomatische Clustering-Analyse durch, nachdem eine repräsentative und zufällig ausgewählte Teilmenge fester Größe an Voxeln selektiert wurde. Die verbliebenen Segmentierungsalgorithmen erhalten Trainingsdaten in Form von Seed-Voxeln, also wenigen Volumenelementen, die von einem Domänenexperten markiert wurden. Diese interaktive Herangehensweise ermöglicht das Einbringen von Expertenwissen ohne die Notwendigkeit vollständig annotierter Trainingsvolumen, wodurch auch einzigartige Scans segmentiert werden können. Für alle Verfahren wird dargelegt, dass die eingeführten Algorithmen von asymptotisch linearer Laufzeit in der Anzahl der Voxel im Volumen sind. Somit können Voxeldaten ohne Größenbeschränkungen in einem effizienten linearen Durchgang verarbeitet werden. Abschließend wird die Performanz der vorgestellten Verfahren auf ausgewählten Daten evaluiert und aufgezeigt, dass mit denselben wenigen Verfahren gute Ergebnisse auf vielen unterschiedlichen Domänen und gleichfalls auf kleinen und großen Volumen erzielt werden können. KW - Segmentation KW - Computed Tomography KW - Artificial Intelligence KW - Active Learning KW - Interactive KW - Machine learning KW - Image processing Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9221 ER - TY - THES A1 - Salehi Rizi, Fatemeh T1 - Graph Representation Learning for Social Networks N2 - Online social networks provide a rich source of information about millions of users worldwide. However, due to sparsity and complex structure, analyzing these networks is quite challenging and expensive. Recently, graph embedding emerged to map networked data into low-dimensional representations, i.e. vector embeddings. These representations are fed into off-the-shelf machine learning algorithms to simplify and speed up graph analytic tasks. Given the immense importance of social network analysis, in this thesis, we aim to study graph embedding for social networks in three directions. Firstly, we focus on social networks at the microscopic level to primarily encode the structural characteristics of users' personal networks, so-called ego networks. These representations are utilized in evaluation tasks whose performance depends on relational information from direct neighbors. For example, social circle prediction and event attendance inference both need structural information from neighbors in social networks. Secondly, we explore assessing the content of vector embeddings in terms of topological properties.
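For instance, one can test whether a simple regressor recovers a vertex-level property, such as the degree, from the embedding alone. The sketch below is illustrative only: it uses a spectral embedding as a stand-in for the learned embeddings studied in the thesis, and Ridge regression as an arbitrary model choice:

import networkx as nx
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

G = nx.barabasi_albert_graph(500, 3, seed=1)
A = nx.to_numpy_array(G)

# Stand-in embedding: leading eigenvectors of the adjacency matrix.
eigvals, eigvecs = np.linalg.eigh(A)
X = eigvecs[:, -16:]                            # 16-dimensional node embeddings
y = np.array([G.degree(n) for n in G.nodes()])  # target: a topological property

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge().fit(X_tr, y_tr)
print("R^2 on held-out nodes:", round(model.score(X_te, y_te), 3))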
This assessment is carried out via two proposed approaches: 1) a learning-to-rank algorithm in which the model weights reveal the importance of properties at the subgraph level (ego networks), and 2) a regression model for the direct approximation of network statistical properties at the vertex level. Thirdly, we propose extensions of graph embedding to capture the sign or additional content of social networks. Users in social media often express their feelings and attitudes towards others, which forms sentiment links besides social links. We design a joint objective function whose terms capture the semantics of both social and sentiment links simultaneously. We also propose a multi-task learning framework for networks with attributes and labels by stacking autoencoders. The weights of the learning tasks are automatically assigned via an adaptive loss weighting layer. Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9211 ER - TY - JOUR A1 - Basmadjian, Robert T1 - Flexibility-Based Energy and Demand Management in Data Centers BT - a Case Study for Cloud Computing JF - Energies N2 - The power demand (kW) and energy consumption (kWh) of data centers increased drastically due to the increased communication and computation needs of IT services. Leveraging demand and energy management within data centers is a necessity. Thanks to the automated ICT infrastructure empowered by the IoT technology, such types of management are becoming more feasible than ever. In this paper, we look at management from two different perspectives: (1) minimization of the overall energy consumption and (2) reduction of peak power demand during demand-response periods. Both perspectives have a positive impact on the total cost of ownership for data centers. We exhaustively reviewed the potential mechanisms in data centers that provide flexibilities, together with flexible contracts such as green service level and supply-demand agreements. We extended the state of the art by introducing the methodological building blocks and foundations of management systems for the above-mentioned two perspectives. We validated our results by conducting experiments on a lab-grade scale cloud computing data center at the premises of HPE in Milano. The obtained results support the theoretical model by highlighting the excellent potential of flexible service level agreements in Green IT: 33% of overall energy savings and 50% of power demand reduction during demand-response periods in the case of data center federation. Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9251 VL - 2019 IS - 12 SP - 1 EP - 22 PB - MDPI CY - Basel ER - TY - THES A1 - Schmid, Matthias T1 - Towards Storing 3D Model Graphs in Relational Databases N2 - The increasing relevance of massive graph data reinforces the need for adequate graph data management. While several graph database engines have been developed, the storage of graph data in a relational database management system, and therefore the seamless integration into existing information systems, remains an open challenge. Motivated by the use case of integrating Building Information Modeling (BIM) data into the MonArch system, we propose a solution that transforms the BIM data into a property graph and stores this graph in the database system. We present a novel approach to efficiently store property graph data in a relational database management system using JSON functionality and redundant storage of edges in adjacency lists, and show how to import huge data sets into this schema.
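A minimal sketch of this storage idea, using SQLite's JSON functions as a stand-in for the RDBMS actually used, with a purely illustrative schema and query translation:

import json
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, label TEXT,
                        properties TEXT,   -- JSON document
                        out_edges  TEXT);  -- redundant adjacency list (JSON array)
    CREATE TABLE edges (src INTEGER, dst INTEGER, label TEXT, properties TEXT);
""")
db.execute("INSERT INTO nodes VALUES (1, 'Person', ?, ?)",
           (json.dumps({"name": "Alice"}), json.dumps([2])))
db.execute("INSERT INTO nodes VALUES (2, 'Person', ?, ?)",
           (json.dumps({"name": "Bob"}), json.dumps([])))
db.execute("INSERT INTO edges VALUES (1, 2, 'knows', '{}')")

# A Cypher pattern such as MATCH (a:Person {name:'Alice'})-->(b) RETURN b
# could be translated into SQL over the redundant adjacency list:
rows = db.execute("""
    SELECT b.properties
    FROM nodes a, json_each(a.out_edges) e
    JOIN nodes b ON b.id = e.value
    WHERE a.label = 'Person' AND json_extract(a.properties, '$.name') = 'Alice'
""").fetchall()
print(rows)  # [('{"name": "Bob"}',)]

Storing the outgoing edges redundantly next to each vertex lets such neighbor expansions run without touching the edge table, which is one plausible reading of the throughput gains reported in the following.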
Applying this approach, we import data sets of up to nearly 1 TB of disk space into the relational database, while only having 96 GB of main memory available. We also present a new approach for retrieving data from this database schema, translating queries written in the popular property graph query language Cypher into SQL. Hence, we provide an intuitive way to write semantically complex queries. We also demonstrate the efficiency of our approach using the standardized Linked Data Benchmark Council – Social Network Benchmark (LDBC - SNB) framework. Our approach increases the throughput for this benchmark by a factor of up to 85, compared to existing approaches for RDBMS. In addition, we propose a new method to transform BIM data into the property graph model and show how to apply the aforementioned property graph storage to this data. We can import IFC models of up to 300 MB within five minutes. We show the suitability of our approach using our own use-case-specific benchmark, which we integrated into the previously mentioned Social Network Benchmark. For our interactive use-case-specific queries, we achieve response times faster than 5 ms in 99% of all executions. Finally, we present how the aforementioned approach to store BIM data in a relational database management system is integrated into the existing MonArch system by splitting the different functionalities of our approach into a microservice architecture. N2 - Die steigende Relevanz von riesigen Graphdatenmengen verstärkt die Notwendigkeit von adäquatem Graphdaten-Management. Während bereits mehrere Graphdatenbanken entwickelt wurden, bleibt die Speicherung von Graphdaten in relationalen Datenbanken und die damit verbundene nahtlose Integration in bereits existierende Informationssysteme eine ungelöste Herausforderung. Motiviert durch unseren eigenen Anwendungsfall, Building Information Modeling (BIM)-Daten in das MonArch-Informationssystem zu integrieren, schlagen wir einen Ansatz vor, BIM-Daten in eine Property-Graph-Form umzuwandeln und diesen Graphen in der Datenbank zu speichern. Um dies zu erreichen, stellen wir einen neuartigen Ansatz vor, um Property Graphen in einem relationalen Datenbanksystem zu speichern, indem wir Funktionalitäten wie JSON und die redundante Speicherung von Kanten in Adjazenzlisten kombinieren, und zeigen, wie große Mengen dieser Daten in das Schema importiert werden können. Durch die Anwendung unseres Ansatzes können wir Datensätze von bis zu 1 TB in das Datenbanksystem importieren, während wir nur 96 GB Hauptspeicher zur Verfügung haben. Wir stellen außerdem einen neuen Ansatz vor, um Daten aus dem zuvor genannten Schema abzufragen, indem wir die beliebte Graphanfragesprache Cypher in die Sprache SQL übersetzen. Dadurch erreichen wir eine intuitive Art, semantisch komplexe Anfragen zu schreiben. Zusätzlich zeigen wir die Effizienz unseres Ansatzes, indem wir das standardisierte Evaluationsframework Social Network Benchmark des Linked Data Benchmark Council (LDBC – SNB) verwenden. Unser Ansatz erhöht den Durchsatz dieses Benchmarks im Vergleich zu existierenden Ansätzen für relationale Datenbanksysteme um das bis zu 85-Fache. Zusätzlich schlagen wir eine neue Methode vor, um BIM-Daten in das Property-Graph-Modell zu übertragen, und zeigen, wie das zuvor vorgestellte Speichermodell verwendet werden kann, um diese Daten zu speichern. Damit können wir IFC-Modelle mit bis zu 300 MB in unter 5 Minuten in unser System importieren.
Schließlich zeigen wir die Eignung unseres Ansatzes, indem wir einen eigenen Benchmark spezifisch für unseren Anwendungsfall verwenden, welchen wir in den zuvor erwähnten Social Network Benchmark integriert haben. Für unsere anwendungsfallspezifischen Anfragen erreichen wir Antwortzeiten von unter 5 ms in 99% der Ausführungen. KW - Graph-based database models KW - Relational database model KW - Property graph KW - Industry Foundation Classes KW - IFC Store Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10353 ER - TY - THES A1 - Bermeitinger, Bernhard T1 - Investigating a Second-Order Optimization Strategy for Neural Networks N2 - In summary, this cumulative dissertation investigates the application of the conjugate gradient method (CG) for the optimization of artificial neural networks (NNs) and compares this method with common first-order optimization methods, especially stochastic gradient descent (SGD). The presented research results show that CG can effectively optimize both small and very large networks. However, the default machine precision of 32 bits can lead to problems. The best results are only achieved in 64-bit computations. The research also emphasizes the importance of the initialization of the NNs' trainable parameters and shows that an initialization using singular value decomposition (SVD) leads to drastically lower error values. Surprisingly, shallow but wide NNs, both in Transformer and CNN architectures, often perform better than their deeper counterparts. Overall, the research results recommend a re-evaluation of the previous preference for extremely deep NNs and emphasize the potential of CG as an optimization method. N2 - Zusammenfassend untersucht die vorliegende kumulative Dissertation die Anwendung des konjugierten Gradienten (CG) zur Optimierung künstlicher neuronaler Netzwerke (NNs) und vergleicht diese Methode mit verbreiteten Optimierungsverfahren erster Ordnung, insbesondere dem Stochastischen Gradientenabstieg (SGD). Die in den Arbeiten präsentierten Forschungsergebnisse zeigen, dass CG in der Lage ist, sowohl kleinere als auch sehr große Netzwerke effektiv zu optimieren. Allerdings kann die Maschinengenauigkeit bei 32-Bit-Berechnungen zu Problemen führen, beste Ergebnisse werden erst in 64-Bit-Fließkommazahlen erreicht. Die Forschung betont auch die Bedeutung der Initialisierung der NN-Parameter und zeigt, dass eine Initialisierung mittels Singulärwertzerlegung zu deutlich geringeren Fehlerwerten führt. Überraschenderweise erzielen flachere NNs bessere Ergebnisse als tiefe NNs mit einer vergleichbaren Anzahl an trainierbaren Parametern, unabhängig vom jeweiligen NN, das die künstlichen Daten erzeugt. Es zeigt sich auch, dass flache, breite NNs, sowohl in Transformer- als auch in CNN-Architekturen, oft besser abschneiden als ihre tieferen Gegenstücke. Insgesamt empfehlen die Forschungsergebnisse eine Neubewertung der bisherigen Präferenz für extrem tiefe NNs und betonen das Potential von CG als Optimierungsmethode.
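One plausible reading of the SVD-based initialization highlighted in both abstracts is to draw a random weight matrix and keep only its orthonormal SVD factors, so that each layer initially preserves norms. The sketch below illustrates this assumption and is not necessarily the dissertation's exact procedure:

import numpy as np

def svd_init(fan_in: int, fan_out: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((fan_in, fan_out))
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt  # all singular values set to 1: a well-conditioned start

W = svd_init(256, 128)
print(np.allclose(W.T @ W, np.eye(128)))  # True: columns are orthonormal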
Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14087 ER - TY - JOUR A1 - Frank, Florian A1 - Böttger, Simon A1 - Mexis, Nico A1 - Anagnostopoulos, Nikolaos Athanasios A1 - Mohamed, Ali A1 - Hartmann, Martin A1 - Kuhn, Harald A1 - Helke, Christian A1 - Arul, Tolga A1 - Katzenbeisser, Stefan A1 - Hermann, Sascha T1 - CNT-PUFs: highly robust and heat-tolerant carbon-nanotube-based physical unclonable functions N2 - In this work, we explored a highly robust and unique Physical Unclonable Function (PUF) based on the stochastic assembly of single-walled Carbon NanoTubes (CNTs) integrated within a wafer-level technology. Our work demonstrated that the proposed CNT-based PUFs are exceptionally robust, with an average fractional intra-device Hamming distance well below 0.01, both at room temperature and under varying temperatures in the range from 23 °C to 120 °C. We attributed the excellent heat tolerance to comparatively low activation energies of less than 40 meV extracted from an Arrhenius plot. As the number of unstable bits in the examined implementation is extremely low, our devices allow for lightweight and simple error correction, just by selecting stable cells, thereby diminishing the need for complex error-correction schemes. Through a significant number of tests, we demonstrated the capability of novel nanomaterial devices to serve as highly efficient hardware security primitives. KW - Carbon NanoTube (CNT) KW - Physical Unclonable Function (PUF) KW - Nanomaterials (NMs) KW - hardware security KW - security KW - privacy KW - Internet of Things (IoT) Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14011 VL - 2023 IS - 13(22) PB - MDPI CY - Basel ER - TY - THES A1 - Fink, Simon Dominik T1 - Constrained Planarity Algorithms in Theory and Practice N2 - In the constrained planarity setting, we ask whether a graph admits a crossing-free drawing that additionally satisfies a given set of constraints. These constraints are often derived from very natural problems; prominent examples are Level Planarity, where vertices have to lie on given horizontal lines indicating a hierarchy, Partially Embedded Planarity, where we extend a given drawing without modifying already-drawn parts, and Clustered Planarity, where we additionally draw the boundaries of clusters which recursively group the vertices in a crossing-free manner. In recent years, the family of constrained planarity problems has received a lot of attention in the field of graph drawing. Efficient algorithms were discovered for many of them, while a few others turned out to be NP-complete. In contrast to the extensive theoretical considerations and the direct motivation by applications, only very few of these algorithms have been implemented and evaluated in practice. The goal of this thesis is to advance the research on both theoretical as well as practical aspects of constrained planarity. On the theoretical side, we consider two types of constrained planarity problems. The first type are problems that individually constrain the rotations of vertices, that is, they restrict the counter-clockwise cyclic orders of the edges incident to vertices. We give a simple linear-time algorithm for the problem Partially Embedded Planarity, which also generalizes to further constrained planarity variants of this type.
The second type of constrained planarity problem concerns more involved planarity variants that come down to the question of whether there are embeddings of one or multiple graphs such that the rotations of certain vertices are in sync in a certain way. Clustered Planarity and a variant of the Simultaneous Embedding with Fixed Edges Problem (Connected SEFE-2) are well-known problems of this type. Both are generalized by our Synchronized Planarity problem, for which we give a quadratic algorithm. Through reductions from various other problems, we provide a unified modelling framework for almost all known efficiently solvable constrained planarity variants that also directly provides a quadratic-time solution to all of them. For both our algorithms, a key ingredient for reaching an efficient solution is the usage of the right data structure for the problem at hand. In this case, these data structures are the SPQR-tree and the PC-tree, which describe planar embedding possibilities from a global and a local perspective, respectively. More specifically, PC-trees can be used to locally describe the possible cyclic orders of edges around vertices in all planar embeddings of a graph. This makes the PC-tree a key component of our algorithms, as it allows us to test planarity while also respecting further constraints, and to communicate constraints arising from the surrounding graph structure between vertices with synchronized rotation. Bridging over to the practical side, we present the first correct implementation of PC-trees. We also describe further improvements, which allow us to outperform all implementations of alternative data structures (out of which we only found very few to be fully correct) by at least a factor of 4. We show that this yields a simple and competitive planarity test that can also produce an embedding to certify planarity. We also use our PC-tree implementation to implement our quadratic algorithm for solving Synchronized Planarity. Here, we show that our algorithm greatly outperforms previous attempts at solving related problems like Clustered Planarity in practice. We also engineer its running time and show how degrees of freedom in the theoretical algorithm can be leveraged to yield an up to tenfold speed-up in practice. KW - Constrained Planarity KW - Clustered Planarity KW - Synchronized Planarity KW - Algorithm Engineering Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13817 ER - TY - RPRT A1 - Eckhardt, Dennis A1 - Freiling, Felix A1 - Herrmann, Dominik A1 - Katzenbeisser, Stefan A1 - Pöhls, Henrich C. T1 - Sicherheit in der Digitalisierung des Alltags: Definition eines ethnografisch-informatischen Forschungsfeldes für die Lösung alltäglicher Sicherheitsprobleme N2 - In den vergangenen Jahrzehnten hat es unübersehbar zahlreiche Fortschritte im Bereich der IT-Sicherheitsforschung gegeben, etwa in den Bereichen Systemsicherheit und Kryptographie. Es ist jedoch genauso unübersehbar, dass IT-Sicherheitsprobleme im Alltag der Menschen fortbestehen. Mutmaßlich liegt dies an der Komplexität von Alltagssituationen, in denen Sicherheitsmechanismen und Gerätefunktionalität sowie deren Heterogenität in schwer antizipierbarer Weise mit menschlichem Verständnis und Alltagsgebrauch interagieren. Um die wissenschaftliche Forschung besser auf Menschen und deren IT-Sicherheitsbedürfnisse auszurichten, müssen wir daher den Alltag der Menschen besser verstehen. Das Verständnis von Alltag ist in der Informatik jedoch noch unterentwickelt.
This contribution aims to define the research field "Sicherheit in der Digitalisierung des Alltags" (security in the digitalization of everyday life) in order to give researchers the opportunity to bundle their efforts in this area. On the one hand, we make proposals for delimiting the scope of computer science research in terms of content. On the other hand, by incorporating research methods from ethnography, which draws its insights from the decidedly subjective observation of the "everyday life" of many individual people, we want to contribute to the methodological advancement of interdisciplinary research in this field. IT security research can then specifically optimize existing mechanisms for genuine everyday usability and develop new fundamental security functionality for the concrete challenges of everyday life. KW - Alltagsdigitalisierung KW - Ethnografie KW - IT-Sicherheit KW - Interdisziplinarität Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13721 ER - TY - THES A1 - Danner, Dominik T1 - Towards Quality of Service and Fairness in Smart Grid Applications N2 - Due to the increasing amount of distributed renewable energy generation and the emerging high demand at consumer connection points, e.g., electric vehicles, the power distribution grid will reach its capacity limit at peak load times if it is not expensively enhanced. Alternatively, smart flexibility management that controls user assets can help to better utilize the existing power grid infrastructure, for example by sharing available grid capacity among connected electric vehicles or by disaggregating flexibility requests to hybrid photovoltaic battery energy storage systems in households. Besides maintaining an acceptable state of the power distribution grid, these smart grid applications also need to ensure a certain quality of service and provide fairness between the individual participants, neither of which is extensively discussed in the literature. This thesis investigates two smart grid applications, namely electric vehicle charging-as-a-service and flexibility-provision-as-a-service from distributed energy storage systems in private households. The electric vehicle charging service allocation is modeled with distributed queuing-based allocation mechanisms which are compared to new probabilistic algorithms. Both integrate user constraints (arrival time, departure time, and energy required) to manage the quality of service and fairness. In the queuing-based allocation mechanisms, electric vehicle charging requests are packetized into logical charging current packets, representing the smallest controllable size of the charging process. These packets are queued at hierarchically distributed schedulers, which allocate the available charging capacity using the time and frequency division multiplexing technique known from the networking domain. This allows multiple electric vehicles to be charged simultaneously with variable charging currents. To achieve high quality of service and fairness among electric vehicle charging processes, dynamic weights are introduced into a weighted fair queuing scheduler that considers electric vehicle departure time and required energy for prioritization. The distributed probabilistic algorithms are inspired by medium access protocols from computer networking, such as binary exponential backoff, and control the quality of service and fairness by adjusting sampling windows and waiting periods based on user requirements.
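To make the prioritization idea behind such a weighted fair queuing scheduler concrete, the following minimal Python sketch splits the available charging capacity proportionally to a dynamic urgency weight derived from departure time and remaining energy. It is illustrative only; the weight formula, names, and numbers are hypothetical simplifications rather than the exact scheme of the thesis.

```python
from dataclasses import dataclass

@dataclass
class ChargingRequest:
    name: str
    departure: float      # hours until departure
    required_kwh: float   # energy still missing

def dynamic_weight(req: ChargingRequest, now: float) -> float:
    """Urgency-based weight: the less slack a vehicle has before
    departure, the larger its share of the charging capacity."""
    slack = max(req.departure - now, 0.25)  # avoid division by zero
    return req.required_kwh / slack

def allocate(requests, capacity_kw, now=0.0):
    """One scheduling round of a weighted fair queue: split the
    available capacity proportionally to the dynamic weights."""
    weights = {r.name: dynamic_weight(r, now) for r in requests}
    total = sum(weights.values())
    return {name: capacity_kw * w / total for name, w in weights.items()}

evs = [ChargingRequest("EV-A", departure=2.0, required_kwh=10.0),
       ChargingRequest("EV-B", departure=8.0, required_kwh=10.0)]
print(allocate(evs, capacity_kw=22.0))  # EV-A, departing sooner, gets more
```

In this toy allocation, the vehicle with the earlier departure and the same remaining energy receives the larger capacity share, mirroring the prioritization goal described above.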
The second smart grid application under investigation aims to provide flexibility provision-as-a-service that disaggregates power flexibility requests to distributed battery energy storage systems in private households. Commonly, the main purpose of stationary energy storage is to store energy from a local photovoltaic system for later use, e.g., for overnight charging of an electric vehicle. This is optimized locally by a home energy management system, which also allows the scheduling of external flexibility requests defined by the deviation from the optimal power profile at the grid connection point, for example, to perform peak shaving at the transformer. This thesis discusses a linear heuristic and a metaheuristic to disaggregate a flexibility request to the single participating energy management systems that are grouped into a flexibility pool. Here, the linear heuristic iteratively assigns portions of the power flexibility to the most appropriate energy management system for one time slot after another, minimizing the total flexibility cost or maximizing the probability of flexibility delivery. In addition, a multi-objective genetic algorithm is proposed that also takes into account power grid aspects, quality of service, and fairness among participating households. The genetic operators are tailored to the flexibility disaggregation search space, taking into account flexibility and energy management system constraints, and enable power-optimized buffering of fitness values. Both smart grid applications are validated on a realistic power distribution grid with real driving patterns and energy profiles for photovoltaic generation and household consumption. The results of all proposed algorithms are analyzed with respect to a set of newly defined metrics on quality of service, fairness, efficiency, and utilization of the power distribution grid. One of the main findings is that none of the tested algorithms outperforms the others in all quality of service metrics; however, integrating user expectations improves the service quality compared to simpler approaches. Furthermore, smart grid control that incorporates users and their flexibility allows the integration of high-load applications such as electric vehicle charging and flexibility aggregation from distributed energy storage systems into the existing electricity distribution infrastructure. However, there is a trade-off between power grid aspects, e.g., grid losses and voltage values, and the quality of service provided. Whenever active user interaction is required, means of controlling the quality of service of users' smart grid applications are necessary to ensure user satisfaction with the services provided. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13731 ER - TY - THES A1 - Brummer, Stephan T1 - Numerisch robuste Berechnung der zirkulären Sichtbarkeitsmenge N2 - Visibility problems such as the following are among the fundamental problems of computational geometry: given a simple polygon, the so-called channel, and a point contained in it, compute the set of points that are visible from this point. Here, a point is visible from another point if the line segment connecting them does not leave the channel. In this thesis we are concerned with circular visibility, where not only line segments but also circular arcs are admissible for connecting two points.
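For the classical straight-line notion just defined, a visibility test can be sketched in a few lines of Python. The snippet below is an illustrative sketch (not code from the thesis) using the shapely library and a hypothetical L-shaped channel; it checks whether the connecting segment stays inside the channel. The circular visibility studied in this thesis generalizes this test by admitting circular arcs.

```python
from shapely.geometry import Polygon, LineString

def visible_straight(channel_vertices, p, q):
    """Classical (non-circular) visibility: p sees q iff the straight
    segment between them never leaves the simple polygon (the channel)."""
    channel = Polygon(channel_vertices)
    segment = LineString([p, q])
    return channel.covers(segment)  # boundary contact still counts as inside

# L-shaped channel: the inner corner blocks the straight line of sight.
channel = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]
print(visible_straight(channel, (0.5, 3.5), (3.5, 0.5)))  # False
print(visible_straight(channel, (0.5, 0.5), (3.5, 0.5)))  # True
```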
Moreover, as the starting point of these so-called visibility arcs and visibility segments, we consider an edge of the channel instead of a single point. Concretely, this thesis contributes to the numerically robust computation of the circular visibility set as seen from an edge of the channel. To this end, an algorithm is presented that decides, for a given point, whether it is visible from the starting edge. If the point is visible, a visibility arc that touches the channel twice is computed. With a suitable choice of the query point, which acts as a third channel contact, the algorithm can thus be used directly to compute so-called boundary arcs of the visibility set. These define the boundary of the circular visibility set and are characterized by touching the channel three times, alternating between the left and the right side. The presented algorithm is based on examining those arcs that connect the starting edge with the point whose visibility is to be determined, but that do not necessarily lie completely inside the channel. In particular, it examines the regions in which the respective arc leaves the channel, the so-called violations. Since the "severity" of such a violation can be quantified, an iterative procedure becomes possible: the arc is modified iteratively such that, while keeping its endpoint fixed, it leaves the channel less and less. If the endpoint, and hence the point under examination, is not visible, the algorithm determines in the course of its execution that no such improvement is possible. The presented algorithm is numerically robust, easy to implement, and its running time is linear in the number of channel vertices. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12299 ER - TY - JOUR A1 - Herbold, Steffen A1 - Hautli‑Janisz, Annette A1 - Heuer, Ute A1 - Kikteva, Zlata A1 - Trautsch, Alexander T1 - A large‑scale comparison of human‑written versus ChatGPT‑generated essays JF - Scientific Reports N2 - ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models—both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher in quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.
Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13961 VL - 13 PB - Springer Nature ER - TY - JOUR A1 - Hassen, Wiem Fekih A1 - Ben Ahmed, Mariem T1 - Optimization of a Redox-Flow Battery Simulation Model Based on a Deep Reinforcement Learning Approach JF - Batteries N2 - Vanadium redox-flow batteries (VRFBs) have played a significant role in hybrid energy storage systems (HESSs) over the last few decades owing to their unique characteristics and advantages. Hence, the accurate estimation of the VRFB model holds significant importance in large-scale storage applications, as accurate models are indispensable for incorporating the distinctive features of energy storage systems and control algorithms within embedded energy architectures. In this work, we propose a novel approach that combines model-based and data-driven techniques to predict battery state variables, i.e., the state of charge (SoC), voltage, and current. Our proposal leverages enhanced deep reinforcement learning techniques, specifically deep Q-learning (DQN), by combining Q-learning with neural networks to optimize the VRFB-specific parameters, ensuring a robust fit between the real and simulated data. Our proposed method outperforms the existing approach in voltage prediction. Subsequently, we enhance the proposed approach by incorporating a second deep RL algorithm—dueling DQN—which is an improvement of DQN, resulting in a 10% improvement in the results, especially in terms of voltage prediction. The proposed approach results in an accurate VRFB model that can be generalized to several types of redox-flow batteries. KW - energy storage KW - redox-flow battery KW - battery modeling KW - battery state variables KW - parameter optimization KW - accurate estimation KW - voltage prediction KW - deep reinforcement learning KW - deep Q-learning KW - dueling deep Q-networks Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13994 VL - 10 PB - MDPI CY - Basel ER - TY - JOUR A1 - Hassen, Wiem Fekih A1 - Azzouz, Imen T1 - Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach JF - Energies N2 - The worldwide adoption of Electric Vehicles (EVs) has brought promising advancements toward a sustainable transportation system. However, the effective charging scheduling of EVs is not a trivial task due to the increase in the load demand in the Charging Stations (CSs) and the fluctuation of electricity prices. Moreover, other issues that raise concern among EV drivers are the long waiting time and the inability to charge the battery to the desired State of Charge (SOC). In order to alleviate the range anxiety of users, we apply a Deep Reinforcement Learning (DRL) approach that provides the optimal charging time slots for an EV based on the photovoltaic power prices, the current EV SOC, the charging connector type, and the history of load demand profiles collected in different locations. Our implemented approach maximizes the EV profit while giving a margin of liberty to the EV drivers to select the preferred CS and the best charging time (i.e., morning, afternoon, evening, or night). The results analysis proves the effectiveness of the DRL model in minimizing the charging costs of the EV by up to 60%, providing a full charging experience to the EV with a waiting time of at most 30 min.
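To illustrate the reinforcement learning idea underlying such charging schedulers, the following toy replaces the papers' deep Q-networks with plain tabular Q-learning on a hypothetical price curve and a single charge-or-wait decision; all numbers, names, and the reward design are invented for this sketch.

```python
import random

# Toy setting: the EV must pick one charging hour before it departs.
# Hypothetical prices: cheap at night, expensive during the day.
PRICE = {h: 0.30 if 8 <= h < 20 else 0.15 for h in range(24)}
HORIZON = 6                      # decision slots until departure
ACTIONS = ["charge", "wait"]
Q = {(t, a): 0.0 for t in range(HORIZON) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.1, 1.0, 0.2

def run_episode(start_hour):
    t = 0
    while t < HORIZON:
        a = random.choice(ACTIONS) if random.random() < EPS else \
            max(ACTIONS, key=lambda x: Q[(t, x)])
        hour = (start_hour + t) % 24
        if a == "charge":
            reward, done = -PRICE[hour], True      # pay the current tariff
        elif t == HORIZON - 1:
            reward, done = -1.0, True              # departs uncharged: penalty
        else:
            reward, done = -0.01, False            # small cost of waiting
        target = reward if done else reward + GAMMA * max(
            Q[(t + 1, x)] for x in ACTIONS)
        Q[(t, a)] += ALPHA * (target - Q[(t, a)])
        if done:
            break
        t += 1

random.seed(0)
for _ in range(20000):
    run_episode(start_hour=18)   # evening arrival; tariff drops at 20:00

print([max(ACTIONS, key=lambda a: Q[(t, a)]) for t in range(HORIZON)])
# typically ['wait', 'wait', 'charge', ...]: wait out the expensive hours
```

With the small waiting cost, the learned greedy policy defers charging past the expensive evening hours and charges once the cheap night tariff starts, which is the qualitative behaviour the DRL scheduler aims for.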
KW - smart EV charging KW - day-ahead planning KW - deep Q-Network KW - data-driven approach KW - waiting time KW - cost minimization KW - real dataset Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13985 VL - 16 PB - MDPI CY - Basel ER - TY - JOUR A1 - Patil, Amit A1 - Ghasemi, Abdorasoul A1 - de Meer, Hermann T1 - Analysis of protection blinding in active distribution grids JF - IET Renewable Power Generation N2 - Protection blinding is a challenging issue in renewables-penetrated distribution grids and refers to a situation where a circuit breaker may not trip due to fault current contribution from distributed generation. This research addresses how the distributed generation location and capacity impact the operation of circuit breakers in terms of their response time. The relative electrical distances of the faults and distributed generation to the circuit breakers are considered. The impact of distributed generation capacity considering the fault location is characterized using a new index called the heterogeneity index. The electrical distance between distributed generations and circuit breakers and the electrical distance between fault and circuit breaker are considered by a second new index called the electrical distance ratio. Data analysis on simulation results shows that these indices capture the phenomena of protection blinding caused by distributed generation. Results show that a higher distributed generation penetration and faults that are electrically further away from a circuit breaker lead to severe cases of protection blinding, as captured by the indices. Furthermore, it is demonstrated how these indices can identify the worst impacted locations in the distribution grid. A key result is that protection blinding does not necessarily occur solely due to the presence of distributed generation between a circuit breaker and a fault, but is dependent on factors such as distributed generation location in the distribution grid, fault level, fault level distribution across the generation units and fault location. KW - General & introductory electrical electronics engineering Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14661 ER - TY - THES A1 - Püllen, Dominik T1 - Holistic Security Engineering for Software-Defined Vehicles N2 - With the increasing use of digital technologies in the automotive sector, the traditional automobile is undergoing a structural transformation, requiring new technologies and enabling innovative mobility concepts. In particular, the ability to drive automatically or even fully autonomously, update control software, and remain connected to the environment allows attackers, in the absence of adequate protection, to infiltrate highly critical vehicle systems and take control. Once not only individual vehicles but entire fleets are dominated by software, cyberattacks could disrupt a significant portion of the infrastructure and expose passengers to substantial risks. This work follows a holistic approach to protecting highly automated software-defined vehicles (SDVs) from cyberattacks by designing and implementing security concepts in the main phases of a vehicle's lifecycle. We use SAE level 4 prototype vehicles to evaluate our proposed techniques.
We start with a systematic security requirement analysis using the ISA-62443 standard series, demonstrating how threats can be identified in a collaborative, hierarchical process and how the resulting security risks impact the software and hardware architecture of a self-driving vehicle. We show how this analysis process results in concrete requirements whose consideration reduces the overall security risk to a tolerable level. Subsequently, we develop technical solutions for selected requirements. We begin by securing the CAN and FlexRay legacy protocols, which we foresee being used in specific areas of SDVs during a transitional period despite technological changes. To enable vehicle-wide security management, we address the management and distribution of cryptographic keys within such networks, mainly focusing on resource-constrained devices. We propose using lightweight implicit certificates for deriving cryptographic group keys that can be used in CAN networks. Additionally, we demonstrate how the slot-based frame structure of the FlexRay protocol allows for efficient "multi-slot" authentication, for which we calculate cryptographic keys using hash-based key chains. SDVs use Ethernet-based communication protocols and custom middleware stacks to transmit large amounts of data in real time. We develop a three-stage security process for the novel ASOA, which enables the development and central orchestration of system-agnostic functional software components on embedded systems and HPC platforms. After the central specification of the security architecture at the data flow level, security tokens are automatically calculated and distributed for runtime protection of the service-oriented, DDS-based data transmission. Our process ensures the strict separation of function and system knowledge, allowing for cost-effective and adaptable security architecture management. The evaluation in four self-driving, software-defined vehicles demonstrates an average runtime overhead of approximately 5.71%. As the initial risk analysis and actual cyberattacks have shown, protective measures against the compromise of control units must be taken alongside communication security. To address this, we develop a method for verifying and validating the software integrity of control units. A governmental third party confirms a measurement through a digital certificate, proving the examined vehicle's trustworthiness and suitability for participation in automated traffic. In the final step of this work, we present an assessment scheme that allows software-defined vehicles to evaluate security incidents during operation in terms of their maximum expected damage and initiate appropriate countermeasures. We follow the ISO/SAE 21434 standard and model attack paths using a graph representing dependencies among internal vehicle assets to account for the propagation effects of cyberattacks. The assessment of a security incident considers not only the probability of individual attack paths but also the vehicle context. Our practical evaluation demonstrates that we can detect, report, and assess security incidents below the human reaction time in the aforementioned prototype vehicles. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14497 ER - TY - THES A1 - Alhamzeh, Alaa T1 - Language Reasoning by means of Argument Mining and Argument Quality N2 - Understanding financial data has always been a point of interest for market participants seeking to make better-informed decisions.
Recently, different cutting-edge technologies have been addressed in the Financial Technology (FinTech) domain, including numeracy understanding, opinion mining and financial document processing. In this thesis, we are interested in analyzing the arguments of financial experts with the goal of supporting investment decisions. Although various business studies confirm the crucial role of argumentation in financial communications, no work has addressed this problem as a computational argumentation task, that is, the automatic analysis of arguments. In this regard, this thesis presents contributions along the three essential axes of theory, data, and evaluation to fill the gap between argument mining and financial text. First, we propose a method for determining the structure of the arguments stated by company representatives during the public announcement of their quarterly results and future estimations through earnings conference calls. The proposed scheme is derived from argumentation theory at the micro-structure level of discourse. We further conducted the corresponding annotation study and published the first financial dataset annotated with arguments: FinArg. Moreover, we investigate the question of evaluating the quality of arguments in this financial genre of text. To tackle this challenge, we suggest using two levels of quality metrics, considering both the Natural Language Processing (NLP) literature on argument quality assessment and the peculiarities of the financial domain. Hence, we have also enriched the FinArg data with our quality dimensions to produce the FinArgQuality dataset. In terms of evaluation, we validate the principle of ensemble learning on the argument identification and argument unit classification tasks. We show that combining a traditional machine learning model with a deep learning one, via an integration model (stacking), improves the overall performance, especially in small dataset settings. In addition, despite the fact that argument mining is mainly a domain-dependent task, to date, the number of studies that tackle the generalization of argument mining models is still relatively small. Therefore, using our stacking approach and in comparison to the transfer learning model of DistilBert, we address and analyze three real-world scenarios concerning the model robustness over completely unseen domains and unseen topics. Furthermore, with the aim of the automatic assessment of argument strength, we have investigated and compared different (refined) versions of Bert-based models that incorporate external knowledge in the decision layer. Consequently, our method outperforms the baseline model by 13 ± 2% in terms of F1-score through integrating Bert with encoded categorical features. Beyond our theoretical and methodological proposals, our model of argument quality assessment, annotated corpora, and evaluation approaches are publicly available, and can serve as strong baselines for future work in both the FinNLP and computational argumentation domains. Hence, directly exploiting this thesis, we proposed to the community a new task/challenge related to the analysis of financial arguments: FinArg-1, within the framework of the NTCIR-17 conference. We also used our proposals to participate in the Touché challenge at the CLEF 2021 conference. Our contribution was selected among the «Best of Labs».
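As a sketch of the stacking idea described above (not the thesis code; the data, models, and parameters are stand-ins, e.g., an SVM for the traditional model and a small MLP in place of the deep model):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for sentence features (e.g., TF-IDF or embeddings);
# label 1 = argumentative sentence, label 0 = non-argumentative.
X, y = make_classification(n_samples=600, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),         # traditional ML model
        ("mlp", MLPClassifier(max_iter=1000)),  # stand-in for the deep model
    ],
    final_estimator=LogisticRegression(),       # the integration model
    cv=5,  # out-of-fold predictions prevent label leakage into the meta-model
)
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))
```

The meta-learner only ever sees out-of-fold predictions of the base models, which is what makes stacking robust in the small-dataset settings mentioned above.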
KW - NLP KW - Argument Mining KW - Argument Quality Assessment KW - Financial Argumentation KW - Earnings Conference Calls Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12699 ER - TY - THES A1 - Graßl, Isabella T1 - Diversity in Programming Education: Effects of Topic and Group Constellation on Young Programming Novices N2 - The field of software engineering faces a significant diversity crisis, characterized by a critical lack of heterogeneity despite ongoing efforts to promote gender equality. The persistent male dominance in this domain has created an urgent need for more heterogeneous groups in software engineering. This lack of diversity not only hinders underrepresented groups from entering the field but also prevents them from gaining initial programming experiences, which are a core component of software engineering and essential for developing computational thinking. To address this crisis and its implications, early interventions are key in shaping positive perceptions, building confidence, and sparking initial interest in programming among underrepresented groups before societal stereotypes of programming as a nerdy field manifest. This means starting with basic programming courses for children and continuing through to first-year university students in order to foster technical skills and computational thinking, alongside creativity and collaboration. However, there is limited understanding of how introductory programming course designs impact diversity-dependent characteristics to create welcoming and learning-friendly environments. This understanding is particularly important for underrepresented groups, especially girls, to benefit from their first programming experiences as they are often hindered by the initial perception of programming as (1) abstract and unappealing, and (2) non-social to novices. Engaging, creative, and relatable topics in programming courses might demystify complex programming concepts, making them more accessible, less intimidating, and appealing. However, understanding programming is not just about the content---it is also about the context in which it is learned. Introducing programming as a social activity is important, particularly for young learners. By emphasizing teamwork, we might encourage collaboration and peer support, counteracting the lone-wolf programmer stereotype. Therefore, this doctoral thesis investigates the effects of both key aspects in programming courses---(1) topic choices and (2) group constellations---on young programming novices. The aim is to provide a holistic understanding of how different course designs can support diverse learners and promote gender equality in programming education. While this research primarily addresses gender diversity due to the persistent gender gap in software engineering, it also examines additional diversity dimensions, including age, ethnicity, prior programming experience, disabilities, and educational background. A total of 13 studies were conducted within this thesis, examining the current state of educational settings and utilizing various introductory programming courses designed for children aged 8 to 18, as well as first-year university students. These studies employed different programming environments, such as Scratch and Sonic Pi, and incorporated a variety of topics and group constellations to observe their effects on student outcomes. Using a mixed-methods design, data were gathered through surveys, observations, and both data-driven and manual code analysis.
Key findings reveal how children utilize the programming environment to engage with and creatively express topics aligned with their interests, which in turn mostly align with gender stereotypes, including elements from internet and popular culture as well as socio-cultural narratives. However, gender-sensitive and neutral topic choices enhance engagement, self-efficacy, contribution, code quality and creative output, while also helping to reduce stereotypical beliefs about programming, particularly among girls. In line with the findings for the course topic, group constellations also influence programming experiences. In particular, introducing pair programming in courses is a promising approach for young learners, but attention must be paid to mitigating socially learned gender-stereotypical behaviours. Another finding indicates that, unlike professional software teams, mixed-diverse student teams often encounter substantial challenges and thus benefit from clear communication guidelines and supportive environments to promote better collaboration. This doctoral thesis concludes with guidelines for designing more effective and inclusive introductory programming courses. These recommendations include using gender-sensitive course materials, allowing for creative freedom through topic choices while encouraging the use of advanced programming concepts, promoting collaboration through pair programming while fostering enhanced communication, boosting self-efficacy with quick positive feedback for girls in particular, and providing emotional support for underrepresented groups. By following these guidelines, educators can create more engaging, inclusive, and effective programming courses. This may ultimately promote a more equitable and diverse future generation of professional software developers while also fostering computational thinking, encouraging a broader interest in programming among all young learners. KW - Softwareentwicklung KW - Lernsituation Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-15049 ER - TY - THES A1 - Berger, Christian T1 - Towards Fast and Adaptive Byzantine State Machine Replication for Planetary-Scale Systems N2 - State machine replication (SMR) is a classical approach for building resilient distributed systems. In Byzantine fault-tolerant (BFT) systems, no concrete assumptions are made about the behavior of faulty replicas. With the advancement of distributed ledger technologies (DLT), planetary-scale BFT SMR is becoming practical and necessary, as it can serve as a consensus primitive to keep the ledger consistent. In our view, the alignment of BFT SMR with DLT brings new challenges, for instance scalability, where recent research addresses latency improvements less frequently than throughput improvements. Further challenges include the geographic dispersion of replicas within a planetary-scale system and the need for a BFT SMR protocol to react to environmental changes during runtime. This thesis aims to improve BFT SMR for planetary-scale systems by lowering the protocol latency observed by clients and by making the BFT SMR system adaptive, i.e., enabling replicas to react to perceived changes such as changing network characteristics or faulty replicas. As a first contribution of this thesis, we discover that fast, consensus-free (read-only) operations are a flawed optimization in seminal BFT SMR frameworks, such as PBFT and BFT-SMaRt.
We explain how the read-only optimization can violate the protocol's liveness by showing an attack, and then present a solution that makes the overall, optimized protocol both live and linearizable. The second contribution is Adaptive Wide-Area Replication (AWARE), which enables a geo-replicated system to adapt to its environment, thus improving the geographical scalability of consensus if replicas are dispersed across the world. Essentially, AWARE is an automated and dynamic voting-weight tuning and leader positioning scheme, which supports the emergence of fast consensus quorums in the system and builds upon previous work, the WHEAT protocol. AWARE combines reliable self-monitoring with a consensus latency prediction model, thus striving to minimize the system's consensus latency at runtime, which subsequently results in latency improvements observed by clients scattered across the globe, as we validate through experiments. The third contribution presents FlashConsensus, a protocol derived from AWARE that also adjusts the resilience threshold. The core idea is the tentative use of a lower resilience threshold, which leads to smaller consensus quorums and thus consensus acceleration in common-case scenarios where we expect only a few faulty replicas. FlashConsensus achieves threat-level awareness through the incorporation of two modes of operation and BFT forensic support, and guarantees liveness and linearizability under optimal resilience. Moreover, FlashConsensus allows for client-side speculation by using incremental consistency guarantees to further lower request latency. Additionally, we investigate the question of whether we can reason about the performance of large-scale systems utilizing simulations. We discover that we can faithfully forecast the performance of BFT protocols by plugging real protocol implementations into a high-performance network simulator. For instance, simulation results reveal that, using 51 replicas scattered across the planet, FlashConsensus can finalize operations in less than 0.4 s, which is half of the time required for a PBFT-like protocol in the same network, and matches the latency of this protocol running on the best possible internet links (transmitting at 67% of the speed of light).
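The effect of voting-weight tuning on quorum latency can be illustrated in a few lines of Python; the latencies, weights, and threshold below are invented for this sketch and ignore the multiple communication steps and weight-derivation rules of the real protocols:

```python
def weighted_quorum_latency(latencies_ms, weights, threshold):
    """Earliest time at which replies whose weights sum to `threshold`
    have arrived: sort replicas by latency and accumulate weight."""
    acc = 0.0
    for lat, w in sorted(zip(latencies_ms, weights)):
        acc += w
        if acc >= threshold:
            return lat
    raise ValueError("threshold not reachable")

# Five replicas; the leader's own vote (latency 0) arrives instantly.
lat = [0, 40, 90, 180, 260]   # one-way delays to the leader (ms)
equal = [1, 1, 1, 1, 1]       # classic quorum: 4 of 5 equal votes
tuned = [2, 2, 1, 1, 1]       # extra weight on the two fastest sites

print(weighted_quorum_latency(lat, equal, threshold=4))  # 180 ms
print(weighted_quorum_latency(lat, tuned, threshold=4))  # 40 ms
```

Shifting weight toward well-connected replicas lets a quorum form from the fastest responses, which is the intuition behind AWARE's latency gains.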
KW - Byzantine fault tolerance KW - state machine replication KW - consensus KW - adaptiveness KW - planetary-scale Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-15059 ER - TY - THES A1 - Sentanoe, Stewart T1 - VMIaaS: Virtual Machine Introspection as a Service N2 - In this digital era, communication in the digital world has become part of our daily lives. One key technology in this digital transformation is cloud computing, which allows users to run a system as a virtual machine (VM) in the cloud without owning a physical server. Unfortunately, adversaries can also use those systems to conduct criminal activities.
Therefore, developing a method to extract evidence from those systems is necessary. One way is through digital forensics, and one method for performing digital forensics on a VM is virtual machine introspection (VMI). However, VMI has yet to be made available by any public cloud provider. This thesis addresses this issue by introducing methods for deploying VMI on public cloud providers. Four main challenges have to be solved. Firstly, VMI requires access to the hypervisor, which can practically access all VMs running on the same server. This leads to security and privacy issues where customers could introspect each other's VMs. To solve this problem, this thesis introduces KVMIveggur, a versatile access control of VMI. It comes with different options that every customer can choose from based on their needs. Secondly, VMI introduces overhead to the running VM. This is because most introspection mechanisms perform data accesses on the monitored VM. Performing data access on a running VM can cause data inconsistency. Hence, it is better to pause the VM before executing the data access. However, when the VM pausing frequency is high, it affects the performance of the monitored VM. Current state-of-the-art techniques use caching to reduce the VM pausing frequency. However, they face a problem: the cached data may be outdated compared to the actual data. Therefore, this thesis introduces VMIFresh, a better caching mechanism. We leverage both active and passive tracing mechanisms to ensure high performance and consistency of the data (freshness). Thirdly, many state-of-the-art VMI libraries and applications run perfectly only on Intel processors because Intel CPUs provide the best hardware support for VMI. However, AMD and ARM processors are getting more popular in cloud computing. Thus, it is necessary to retrofit VMI capabilities to support AMD and ARM processors. This thesis describes the requirements to employ VMI on AMD and ARM processors. We also provide the implementation of those requirements. Finally, to do introspection using VMI, it is crucial to have proper symbol information (layout and location of data structures) of the introspected operating system (OS) and user applications. While many existing VMI approaches concentrate primarily on analyzing OS data structures, analyzing user application data often receives no attention. In our approach, we address this gap by focusing on application-level introspection. We have identified several use cases that require this kind of introspection. We focus on cryptographic key extraction for two specific instances: secure shell (SSH) and transport layer security (TLS), by leveraging the power of machine learning techniques to locate those keys in main memory effectively and efficiently. After solving those challenges, we combined several of our approaches and introduced two VMI applications: Sarracenia and VMIGuard. Sarracenia is a deception technology that tracks activities performed in an SSH session. The main goal of Sarracenia is to attract adversaries away from the production system and learn about their behavior. VMIGuard also monitors SSH traffic, but focuses specifically on git-related activity. The main goal of VMIGuard is to protect the integrity of the hosted data against internal malicious actors.
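A minimal sketch of the caching idea behind VMIFresh (illustrative only; the real mechanism operates on hypervisor-level events and guest-physical memory): reads are cached per page, and write events reported by the tracing mechanism invalidate exactly the affected page, so cached data never goes stale.

```python
class FreshCache:
    """Toy freshness-aware VMI read cache: expensive pause-and-read
    introspection happens only on a cache miss; observed guest writes
    invalidate the affected page."""

    PAGE = 4096

    def __init__(self, read_backend):
        self._read = read_backend   # expensive introspection read
        self._pages = {}

    def read_page(self, gpa):
        page = gpa // self.PAGE
        if page not in self._pages:             # miss: pause-and-read once
            self._pages[page] = self._read(page * self.PAGE, self.PAGE)
        return self._pages[page]

    def on_write_event(self, gpa):
        """Called by the (active or passive) tracing mechanism."""
        self._pages.pop(gpa // self.PAGE, None)

backing = {0: b"A" * 4096}
cache = FreshCache(lambda addr, n: backing[addr // 4096])
cache.read_page(0)        # first read is served by the backend, then cached
cache.on_write_event(42)  # a guest write to page 0 invalidates the cache
```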
KW - Cloud Computing KW - Computersicherheit Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-15027 ER - TY - JOUR A1 - Deiner, Adina A1 - Feldmeier, Patric A1 - Fraser, Gordon A1 - Schweikl, Sebastian A1 - Wang, Wengran T1 - Automated test generation for SCRATCH programs JF - Empirical Software Engineering N2 - The importance of programming education has led to dedicated educational programming environments, where users visually arrange block-based programming constructs that typically control graphical, interactive game-like programs. The SCRATCH programming environment is particularly popular, with more than 90 million registered users at the time of this writing. While the block-based nature of SCRATCH helps learners by preventing syntactical mistakes, there nevertheless remains a need to provide feedback and support in order to implement desired functionality. To support individual learning and classroom settings, this feedback and support should ideally be provided in an automated fashion, which requires tests to enable dynamic program analysis. In prior work we introduced WHISKER, a framework that enables automated testing of SCRATCH programs. However, creating these automated tests for SCRATCH programs is challenging. In this paper, we therefore investigate how to automatically generate WHISKER tests. Generating tests for SCRATCH raises important challenges: First, game-like programs are typically randomised, leading to flaky tests. Second, SCRATCH programs usually consist of animations and interactions with long delays, inhibiting the application of classical test generation approaches. Thus, the new application domain raises the question of which test generation technique is best suited to produce high coverage tests capable of detecting faulty behaviour. We investigate these questions using an extension of the WHISKER test framework for automated test generation. Evaluation on common programming exercises, a random sample of 1000 SCRATCH user programs, and the 1000 most popular SCRATCH programs demonstrates that our approach enables WHISKER to reliably accelerate test executions, and even though many SCRATCH programs are small and easy to cover, there are many unique challenges for which advanced search-based test generation using many-objective algorithms is needed in order to achieve high coverage. KW - Search-based testing KW - Block-based programming KW - SCRATCH Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2023091108301581209964 VL - 28 IS - 3 SP - 1 EP - 63 PB - Springer Nature CY - Berlin ER - TY - JOUR A1 - Schwartz, Niels T1 - Topology of closure systems in algebraic lattices JF - Algebra universalis N2 - Algebraic lattices are spectral spaces for the coarse lower topology. Closure systems in algebraic lattices are studied as subspaces. Connections between order-theoretic properties of a closure system and topological properties of the subspace are explored. A closure system is algebraic if and only if it is a patch closed subset of the ambient algebraic lattice. Every subset X in an algebraic lattice P generates a closure system ⟨X⟩_P. The closure system ⟨Y⟩_P generated by the patch closure Y of X is the patch closure of ⟨X⟩_P. If X is contained in the set of nontrivial prime elements of P then ⟨X⟩_P is a frame and is a coherent algebraic frame if X is patch closed in P. Conversely, if the algebraic lattice P is coherent then its set of nontrivial prime elements is patch closed.
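For readers less familiar with the terminology, the standard definition of a closure operator, which underlies the closure systems studied in this article, can be stated as follows (textbook material, added here for convenience, not specific to the article):

```latex
% A map c : P -> P on a complete lattice P is a closure operator if,
% for all x, y in P,
\begin{align*}
  x &\leq c(x)                        && \text{(extensive)}\\
  x \leq y &\implies c(x) \leq c(y)   && \text{(monotone)}\\
  c(c(x)) &= c(x)                     && \text{(idempotent)}
\end{align*}
% Its image c(P) is a closure system, i.e., a subset closed under
% arbitrary meets; conversely, every meet-closed subset arises this way.
```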
KW - Poset KW - Complete lattice KW - Algebraic lattice KW - Frame KW - Closure system KW - Closure operator KW - Spectral space KW - Specialization KW - Coarse lower topology KW - Scott topology KW - Patch topology Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2023090808111682406614 VL - 84 IS - 2 SP - 1 EP - 33 PB - Springer Nature CY - Berlin ER - TY - JOUR A1 - Münch, Miriam A1 - Rutter, Ignaz A1 - Stumpf, Peter T1 - Partial and Simultaneous Transitive Orientations via Modular Decompositions JF - Algorithmica N2 - A natural generalization of the recognition problem for a geometric graph class is the problem of extending a representation of a subgraph to a representation of the whole graph. A related problem is to find representations for multiple input graphs that coincide on subgraphs shared by the input graphs. A common restriction is the sunflower case, where the shared graph is the same for each pair of input graphs. These problems translate to the setting of comparability graphs, where the representations correspond to transitive orientations of their edges. We use modular decompositions to improve the runtime for the orientation extension problem and the sunflower orientation problem to linear time. We apply these results to improve the runtime for the partial representation problem and the sunflower case of the simultaneous representation problem for permutation graphs to linear time. We also give the first efficient algorithms for these problems on circular permutation graphs. KW - Representation extension KW - Simultaneous representation KW - Comparability graph KW - Permutation graph KW - Circular permutation graph KW - Modular decomposition Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2024022914111908687000 VL - 86 IS - 4 SP - 1263 EP - 1292 PB - Springer Nature CY - Berlin ER - TY - JOUR A1 - Trautsch, Alexander A1 - Herbold, Steffen A1 - Grabowski, Jens T1 - Are automated static analysis tools worth it? An investigation into relative warning density and external software quality on the example of Apache open source projects JF - Empirical Software Engineering N2 - Automated Static Analysis Tools (ASATs) are part of software development best practices. ASATs are able to warn developers about potential problems in the code. On the one hand, ASATs are based on best practices, so there should be a noticeable effect on software quality. On the other hand, ASATs suffer from false positive warnings, which developers have to inspect and then ignore or mark as invalid. In this article, we ask whether ASATs have a measurable impact on external software quality, using the example of PMD for Java. We investigate the relationship between ASAT warnings emitted by PMD and defects per change and per file. Our case study includes data for the history of each file as well as the differences between changed files and the project in which they are contained. We investigate whether files that induce a defect have more static analysis warnings than the rest of the project. Moreover, we investigate the impact of two different sets of ASAT rules. We find that bug-inducing files contain fewer static analysis warnings than other files of the project at that point in time. However, this can be explained by the overall decreasing warning density. When compared with all other changes, we find a statistically significant difference in one metric for all rules and two metrics for a subset of rules.
However, the effect size is negligible in all cases, showing that the actual difference in warning density between bug-inducing changes and other changes is small at best. KW - Static code analysis KW - Quality evolution KW - Software metrics KW - Software quality Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2023091108203018898026 VL - 28 IS - 3 SP - 1 EP - 21 PB - Springer Nature CY - Berlin ER - TY - JOUR A1 - Frühwirth, Lorenz A1 - Prochno, Joscha T1 - Hölder's inequality and its reverse — a probabilistic point of view JF - Mathematische Nachrichten N2 - In this article, we take a probabilistic look at Hölder's inequality, considering the ratio of terms in the classical Hölder inequality for random vectors in ℝ^n. We prove a central limit theorem for this ratio, which then allows us to reverse the inequality up to a multiplicative constant with high probability. The models of randomness include the uniform distribution on ℓ_p^n balls and spheres. We also provide a Berry–Esseen-type result and prove a large and a moderate deviation principle for the suitably normalized Hölder ratio. KW - Berry–Esseen bound KW - central limit theorem KW - Hölder's inequality KW - ℓ_p^n ball KW - large deviation principle KW - moderate deviation principle KW - reverse inequality Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2023062315175042989607 VL - 296 IS - 12 SP - 5493 EP - 5512 PB - Wiley CY - Hoboken ER - TY - THES A1 - Hofstadler, Julian T1 - Qualitative and quantitative convergence results for randomised integration methods N2 - In this thesis, different randomised integration methods based on either randomised Quasi-Monte Carlo or (adaptive) Markov chain Monte Carlo methods are studied. Depending on the underlying integration problem, we show qualitative and quantitative results, which ensure the asymptotic correctness of an algorithm or provide explicit error bounds. The first problem we consider is Lebesgue integration in the unit cube. We prove that a class of structured randomised integration methods is consistent w.r.t. convergence in mean and in probability for any integrable function. Under slightly stronger integrability conditions, we show that one also has almost sure convergence for median-modified methods. We demonstrate the applicability of our theoretical results by considering randomly shifted lattice rules, randomised (t,d)-sequences, Latin hypercube samples, and randomised Frolov points. Secondly, we study integration w.r.t. probability measures which are available only via their non-normalised density. In this context, we investigate Markov chain Monte Carlo methods which satisfy a spectral gap condition and functions which do not need to have a finite second moment. We prove error bounds for the absolute mean error where the rate of convergence is optimal. Illustrative scenarios where our theory is applicable are the random walk Metropolis algorithm as well as slice samplers. Finally, we study so-called adaptive increasingly rare Markov chain Monte Carlo algorithms. Based on a simultaneous Wasserstein contraction assumption, we estimate the mean squared error and also prove bounds which characterise the path-wise convergence of the estimator. To demonstrate the applicability of our results, we consider a number of examples, among which are doubly intractable distributions.
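As an illustration of the randomly shifted lattice rules (with median modification) mentioned in this abstract, consider the following Python sketch; the generating vector, integrand, and parameters are arbitrary choices for demonstration, not taken from the thesis.

```python
import numpy as np

def shifted_lattice_estimate(f, z, n, d, shifts, rng):
    """Randomised QMC: median over independent randomly shifted rank-1
    lattice rules with generating vector z (points in [0,1)^d)."""
    i = np.arange(n)[:, None]
    base = (i * np.asarray(z)[None, :] / n) % 1.0   # deterministic lattice
    estimates = []
    for _ in range(shifts):
        delta = rng.random(d)                        # uniform random shift
        pts = (base + delta) % 1.0
        estimates.append(np.mean(f(pts)))
    return np.median(estimates)                      # median modification

rng = np.random.default_rng(0)
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)  # true integral on [0,1]^2 is 1
print(shifted_lattice_estimate(f, z=[1, 34], n=55, d=2, shifts=11, rng=rng))
```

The random shift makes each lattice-rule estimate unbiased, and taking the median over independent shifts is the kind of median modification for which the thesis establishes almost sure convergence.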
Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-15196 ER - TY - THES A1 - Greifenstein, Luisa T1 - Supporting Primary School Programming Education through Formative Feedback N2 - Children are increasingly surrounded by aspects of computer science in their everyday lives. Primary school education aims at empowering children to participate in and reflect on their environment. Therefore, computer-science-related content such as programming is increasingly introduced into primary school curricula. However, this also involves challenges, in particular for teachers, who need to familiarise themselves with the new curriculum. As a result, primary school teachers often struggle to help primary school children with their programming issues. Corrective feedback given during the learning process (i.e., formative feedback) can help by promoting cognitive factors such as content knowledge. This thesis therefore aims to support primary school programming education through formative feedback. In order to shed light on different perspectives, both teachers and children participated in the studies in a mixed-methods design. Teachers' challenges and children's programming issues were explored as a basis for knowing what both target groups struggle with. This was done by conducting content analysis on the challenges and issues collected and using the resulting categories for further quantitative analysis. The effects of different characteristics of feedback on the effectiveness and efficiency of teaching and learning programming were then explained. This was done by asking the teachers and children to explain their ratings and conducting content analysis on their explanations. The support of primary school programming education through formative feedback builds on the major challenge of teachers' lack of content knowledge and the corresponding strategies of teacher training and automated feedback. Indeed, automated feedback was found to be mostly helpful for debugging and task creation. In order to provide direct support to children, common programming issues were identified in terms of the understanding of programming concepts and the usage of the programming environment. Effects on children's learning and their preferences were identified for several feedback characteristics, leading for example to elaborated hints instead of simple direct instructions. Based on these results, a formative feedback approach of hint cards was developed and evaluated. The hint cards approach proved to be a useful example strategy for supporting primary school programming education through formative feedback.
KW - feedback KW - programming education KW - primary school education KW - Rückmeldung KW - Grundschulunterricht KW - Algorithmische Programmierung Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-15188 ER - TY - THES A1 - Welearegai, Gebrehiwet Biyane T1 - Precise Detection of Injection Attacks in Real-world Applications N2 - Code injection attacks, like the one used in the high-profile 2017 Equifax breach, have become increasingly common, ranking at the top of OWASP's list of critical web application vulnerabilities. Injection attacks can also target embedded applications running on processors like ARM and Xtensa by exploiting memory bugs and maliciously altering the program's behavior or even taking full control over a system. In particular, ARM's support for low power consumption without sacrificing performance is leading the industry to shift towards ARM processors, which draws the attention of injection attacks to them as well. In this thesis, we consider web applications and embedded applications (running on ARM and Xtensa processors) as targets of injection attacks. To detect injection attacks in web applications, taint analysis is most commonly proposed, but the precision, scalability, and runtime overhead of the detection depend on the analysis type (e.g., static vs dynamic, sound vs unsound). Moreover, among the existing dynamic taint tracking approaches for Java-based applications, even the most performant can impose a slowdown of at least 10–20% and often far more.
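The principle of dynamic taint tracking can be illustrated with a deliberately simplified Python toy (Rivulet itself works on Java bytecode and tracks taint at a much finer granularity; all names here are invented for the sketch):

```python
class SecurityError(Exception):
    pass

class Tainted(str):
    """Toy taint tracking: strings derived from untrusted input stay
    tainted; concatenation propagates the taint mark."""
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def source(request_param):
    return Tainted(request_param)          # data from the user is tainted

def sanitise(value):
    return str.replace(value, "'", "''")   # returns a plain str: taint cleared

def sql_sink(query):
    if isinstance(query, Tainted):         # tainted data reached a sink
        raise SecurityError("potential SQL injection detected")
    print("executing:", query)

user = source("alice' OR '1'='1")
try:
    sql_sink("SELECT * FROM t WHERE name = '" + user + "'")
except SecurityError as e:
    print(e)
sql_sink("SELECT * FROM t WHERE name = '" + sanitise(user) + "'")
```

The runtime cost of keeping such taint marks attached to every value is precisely the source of the slowdowns mentioned above.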
On the other hand, considering embedded applications, while some initial research has tried to detect injection attacks (i.e., ROP and JOP) on ARM, these approaches suffer from high performance or storage overhead. Moreover, the Xtensa platform has been neglected, even though it is used in most firmware-based embedded WiFi home automation devices. This thesis aims to provide novel approaches to precisely detect injection attacks on both web and embedded applications. To that end, we compare JavaScript static analysis frameworks to evaluate the security of a hybrid app (JS & native) from an industrial partner, provide Rivulet, a tool that precisely detects injection attacks in Java-based real-world applications, and investigate injection attack detection on ARM and Xtensa platforms using hardware performance counters (HPCs) and machine learning (ML) techniques. To evaluate the security of the hybrid application, we initially compare the precision, scalability, and code coverage of two widely-used static analysis frameworks—WALA and SAFE. The result of our comparison shows that SAFE provides higher precision and better code coverage at the cost of somewhat lower scalability. Based on these results, we analyze the data flows of the hybrid app via taint analysis by extending SAFE’s taint analysis and detect potential injection attacks in the hybrid application. Similarly, to detect injection attacks in Java-based applications, we provide Rivulet, which monitors the execution of developer-written functional tests using dynamic taint tracking. Rivulet uses a white-box test generation technique to re-purpose those functional tests to check if any vulnerable flow could be exploited. We compared Rivulet to the state-of-the-art static vulnerability detector Julia on benchmarks, and Rivulet outperformed Julia in both false positives and false negatives. We also used Rivulet to detect new vulnerabilities. Moreover, for applications running on ARM and Xtensa platforms, we investigate ROP attack detection by combining HPCs and ML techniques. We collect data exploiting real-world vulnerable applications and small benchmarks to train the ML models. For ROP attack detection on ARM, we also implement an online monitor which labels a program’s execution as benign or under attack and stops its execution once the latter is detected. Evaluating our ROP attack detection approach on ARM provides a detection accuracy of 92% for offline training and 75% for online monitoring. Similarly, our ROP attack detection on the firmware-only Xtensa processor provides an overall average detection accuracy of 79%. Last but not least, this thesis shows how effective taint analysis is for precisely detecting injection attacks on web applications, and demonstrates the power of HPCs combined with machine learning for detecting control-flow injection attacks on ARM and Xtensa platforms. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12926 ER - TY - JOUR A1 - Ghosh Dastidar, Kanishka A1 - Jurgovsky, Johannes A1 - Siblini, Wissam A1 - Granitzer, Michael T1 - NAG: neural feature aggregation framework for credit card fraud detection JF - Knowledge and Information Systems N2 - The state-of-the-art feature-engineering method for fraud classification of electronic payments uses manually engineered feature aggregates, i.e., descriptive statistics of the transaction history. However, this approach has limitations, primarily that of being dependent on expensive human expert knowledge.
There have been attempts to replace manual aggregation through automatic feature extraction approaches. They, however, do not consider the specific structure of the manual aggregates. In this paper, we define the novel Neural Aggregate Generator (NAG), a neural network-based feature extraction module that learns feature aggregates end-to-end on the fraud classification task. In contrast to other automatic feature extraction approaches, the network architecture of the NAG closely mimics the structure of feature aggregates. Furthermore, the NAG extends learnable aggregates over traditional ones through soft feature value matching and relative weighting of the importance of different feature constraints. We provide a proof to show the modeling capabilities of the NAG. We compare the performance of the NAG to state-of-the-art approaches on a real-world dataset with millions of transactions. More precisely, we show that features generated with the NAG lead to improved results over manual aggregates for fraud classification, thus demonstrating its viability to replace them. Moreover, we compare the NAG to other end-to-end approaches such as the LSTM or a generic CNN. Here we also observe improved results. We perform a robust evaluation of the NAG through a parameter budget study, an analysis of the impact of different sequence lengths and also the predictions across days. Unlike the LSTM or the CNN, our approach also provides further interpretability through the inspection of its parameters. KW - finance KW - credit card fraud KW - representation learning Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2022060219175235764329 VL - 2022 IS - 64 SP - 831 EP - 858 PB - Springer Nature CY - Berlin ER - TY - THES A1 - Patil, Amit Dilip T1 - Towards Resilient Protection of Interconnected ICT and Power Systems N2 - Due to the increasing number of distributed renewable energy sources, the distribution grid faces new operational challenges. Information and Communication Technology (ICT) systems can resolve these challenges through grid services that use automation, monitoring, and real-time decision-making, helping maintain an acceptable operational state of the distribution grid. However, the reliance of the power system on the ICT system and vice versa in the so-called smart grid creates interdependencies between the systems, which present new pathways for failure propagation. Therefore, these interdependencies require special attention to ensure stable system operation in the face of these challenges. However, these interdependencies have not been studied extensively in the literature. This thesis investigates approaches to model, quantify and improve the performance and resilience of the smart grid infrastructure. The interdependencies are formalised as interconnectors, entities that exist in all the connected systems. These interconnectors consist of components from both systems, where the components are modelled as state variables. These state variables determine the interconnector state and the service delivered. Failures, represented by a change in state variables, may impact this state. These state variables are deployed in a discrete event simulation framework to determine the system performance over time. The simulation result is represented on a two-dimensional state-space diagram depicting the operational state and service delivered. This allows for the resilience analysis of a system under various scenarios.
The interdependencies are further investigated by exploring the role of ICT-based grid services in power grids, whose state is defined based on ICT properties, such as latency. These properties are formalised using property graphs. The ICT properties obtained from these graphs are used to parameterise a finite state automaton model of grid service states, which are then used to determine the state of the entire smart grid. Case studies of state estimation and adaptive protection highlight the application of this approach. The use of ICT in state estimation allows the distinction of a global and a perceived view of the power grid, which influences decision-making in the face of challenges. This thesis further investigates the protection system in detail. The overcurrent protection system is adversely impacted by distributed generation, resulting in undesired phenomena such as protection blinding. This thesis characterises this phenomenon by proposing two indices that capture the protection trip time under the influence of distributed generation. These indices consider the electrical distance between faults, protection and distributed generation. These indices and simulation results identify the worst-impacted locations in the power grid in terms of protection trip time. They also identify fault locations under given assumptions that do not cause protection blinding. ICT can resolve protection blinding by adapting the sensitivity of protection relays. However, since faults must be cleared in a short timeframe, communication delays may adversely impact the fault-clearing time. A discrete event simulation model is proposed to study protection performance in distribution grids. Investigation of time distribution assumptions reveals that the lognormal distribution accurately captures the circuit breaker trip time. The impact of distributed generation and communication delay on the protection system is determined by measuring fault-clearing times using discrete event simulation. Results show that for the system studied, protection blinding is critical for low impedance faults in grids with high fault levels, while high impedance faults are critical in grids with low fault levels. Moreover, sympathetic tripping is seen at increased distribution grid fault levels and fault impedances. Furthermore, while communication systems reduce fault-clearing times, increased delays harm protection systems. Finally, communication system components like sensors can fail, preventing fault detection. This thesis proposes a genetic algorithm-based approach to optimally place redundant sensors, minimising protection blinding under communication uncertainty within a redundancy budget. Results demonstrate the algorithm's effectiveness in optimising redundant sensor locations, reducing system costs, and improving fault tolerance. For the system and scenarios investigated, an average of 60% of redundant sensors are relocated, reducing the average protection trip time by 36.65% compared to a baseline approach that does not consider communication uncertainty. This encourages incorporating communication component failure considerations in power system planning.
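To make the final optimisation step tangible, the following is a minimal sketch, assuming a bitstring encoding of candidate sensor locations, of a budget-constrained genetic algorithm for redundant sensor placement. It is not the thesis' implementation: the fitness function expected_trip_time is a hypothetical stand-in for the simulation-based evaluation of protection trip times under communication uncertainty described above, and all constants are illustrative.

```python
import random

random.seed(0)

N_LOCATIONS = 30   # candidate sensor locations (assumption)
BUDGET = 6         # redundancy budget: max. number of redundant sensors
POP_SIZE = 40
GENERATIONS = 100

# Placeholder fitness: in the thesis this would be a discrete event
# simulation of protection trip times under communication uncertainty.
# Here each location gets a random "criticality" weight, and covering
# more critical locations lowers the (fictitious) average trip time.
CRITICALITY = [random.random() for _ in range(N_LOCATIONS)]

def expected_trip_time(individual):
    covered = sum(w for w, bit in zip(CRITICALITY, individual) if bit)
    return 1.0 / (1.0 + covered)  # lower is better

def repair(individual):
    # Enforce the redundancy budget by removing random surplus sensors.
    ones = [i for i, bit in enumerate(individual) if bit]
    for i in random.sample(ones, max(0, len(ones) - BUDGET)):
        individual[i] = 0
    return individual

def random_individual():
    ind = [0] * N_LOCATIONS
    for i in random.sample(range(N_LOCATIONS), BUDGET):
        ind[i] = 1
    return ind

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if expected_trip_time(a) <= expected_trip_time(b) else b

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    offspring = []
    for _ in range(POP_SIZE):
        p1, p2 = tournament(population), tournament(population)
        child = [random.choice(bits) for bits in zip(p1, p2)]  # uniform crossover
        if random.random() < 0.2:                              # bit-flip mutation
            child[random.randrange(N_LOCATIONS)] ^= 1
        offspring.append(repair(child))
    # (mu + lambda) survivor selection: keep the best placements found so far
    population = sorted(population + offspring, key=expected_trip_time)[:POP_SIZE]

best = population[0]
print("best placement:", [i for i, bit in enumerate(best) if bit])
print("fitness:", expected_trip_time(best))
```

In a real study the placeholder fitness would be replaced by the discrete event simulation of fault-clearing times, which is the expensive evaluation that the population-based search has to spend economically.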
KW - information and communication technology KW - smart grid KW - discrete event simulation KW - property graphs KW - protection blinding KW - genetic algorithm Y1 - 2025 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-16088 ER - TY - JOUR A1 - Fekih Hassen, Wiem A1 - Challouf, Maher T1 - Long short-term renewable energy sources prediction for grid-management systems based on stacking ensemble model JF - Energies N2 - The transition towards sustainable energy systems necessitates effective management of renewable energy sources alongside conventional grid infrastructure. This paper presents a comprehensive approach to optimizing grid management by integrating Photovoltaic (PV), wind, and grid energies to minimize costs and enhance sustainability. A key focus lies in developing an accurate scheduling algorithm utilizing Mixed Integer Programming (MIP), enabling dynamic allocation of energy resources to meet demand while minimizing reliance on cost-intensive grid energy. An ensemble learning technique, specifically a stacking algorithm, is employed to construct a robust forecasting pipeline for PV and wind energy generation. The forecasting model achieves remarkable accuracy with a Root Mean Squared Error (RMSE) of less than 0.1 for short-term (15 min and one day ahead) and long-term (one week and one month ahead) predictions. By combining optimization and forecasting methodologies, this research contributes to advancing grid management systems capable of harnessing renewable energy sources efficiently, thus facilitating cost savings and fostering sustainability in the energy sector. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14649 VL - 2024 IS - 17(13) ER - TY - JOUR A1 - Kaiser, Tobias T1 - Growth of log-analytic functions JF - Archiv der Mathematik N2 - We show that unary log-analytic functions are polynomially bounded. In the higher dimensional case, globally a log-analytic function can have exponential growth. We show that a log-analytic function is polynomially bounded on a definable set which contains the germ of every ray at infinity. KW - Log-analytic functions KW - Polynomially bounded KW - Exponential growth Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2023091614564557203657 VL - 120 IS - 6 SP - 605 EP - 614 PB - Springer Nature CY - Berlin ER - TY - JOUR A1 - Kaiser, Tobias T1 - Periods, power series, and integrated algebraic numbers JF - Mathematische Annalen N2 - Periods are defined as integrals of semialgebraic functions defined over the rationals. Periods form a countable ring about which not much is known. Examples are given by taking the antiderivative of a power series which is algebraic over the polynomial ring over the rationals and evaluating it at a rational number. We follow this path and close these algebraic power series under taking iterated antiderivatives and nearby algebraic and geometric operations. We obtain a system of rings of power series whose coefficients form a countable real closed field. Using techniques from o-minimality we are able to show that every period belongs to this field. In the setting of o-minimality we define exponential integrated algebraic numbers and show that exponential periods and the Euler constant are exponential integrated algebraic numbers. Hence the exponential integrated algebraic numbers are a good candidate for a natural number system extending the period ring and containing important mathematical constants.
KW - algebraic geometry KW - algebraic topology KW - algebra KW - associative rings and algebras KW - commutative rings and algebras KW - number theory Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2024040910285674698609 SN - 0025-5831 SN - 1432-1807 VL - 390 IS - 2 SP - 2043 EP - 2074 PB - Springer Berlin Heidelberg CY - Berlin/Heidelberg ER - TY - THES A1 - Liang, Hanning T1 - Deflectometric Measurement of the Topography of Reflecting Freeform Surfaces in Motion N2 - Measuring the topography of specular surfaces with strong surface structures in motion was not possible before this research. A new method based on single-shot phase-measuring deflectometry (SSPMD) and combining different solution aspects is presented. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11672 ER - TY - JOUR A1 - Becher, Stefan A1 - Gerl, Armin ED - Sarne, Giuseppe Maria Luigi ED - Ma, Jianhua ED - Rosaci, Domenico ED - Srivastava, Gautam T1 - ConTra Preference Language: Privacy Preference Unification via Privacy Interfaces JF - Sensors N2 - After the enactment of the GDPR in 2018, many companies were forced to rethink their privacy management in order to comply with the new legal framework. These changes mostly affect the Controller to achieve GDPR-compliant privacy policies and management. However, measures to give users a better understanding of privacy, which is essential to generate legitimate interest in the Controller, are often skipped. We recommend addressing this issue through the use of privacy preference languages, whereby users define rules regarding their preferences for privacy handling. In the literature, preference languages only work with their corresponding privacy language, which limits their applicability. In this paper, we propose the ConTra preference language, which we envision to support users during privacy policy negotiation while meeting current technical and legal requirements. Therefore, ConTra preferences are defined, showing the language's expressiveness, extensibility, and applicability in resource-limited IoT scenarios. In addition, we introduce a generic approach which provides privacy language compatibility for unified preference matching. KW - privacy KW - preference language KW - legal factors KW - GDPR KW - usability Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11218 SN - 1424-8220 VL - 22 IS - 14 PB - MDPI CY - Basel, Switzerland ER - TY - THES A1 - Lachat, Paul T1 - Detecting Inference Attacks Involving Sensor Data N2 - The collection of personal information by organizations has become increasingly essential for social interactions. Nevertheless, according to the GDPR (General Data Protection Regulation), organizations have to protect collected data. Access Control (AC) mechanisms are traditionally used to secure information systems against unauthorized access to sensitive data. The increased availability of personal sensor data, thanks to IoT-oriented applications, motivates new services to offer insights about individuals. Consequently, data mining algorithms have been proposed to infer personal insights from collected sensor data. Although they can be used for genuine purposes, attackers can leverage these outcomes, combine them with other types of data, and further breach individuals’ privacy. Thus, bypassing AC mechanisms thanks to such insights is a concrete problem. We propose an inference detection system based on the analysis of queries issued on a sensor database.
The knowledge obtained through these queries, and the inference channels corresponding to the use of data mining algorithms on sensor data to infer individual information, are described using the Raw sensor data based Inference ChannEl Model (RICE-M). The detection is carried out by the RICE-M based inference detection system (RICE-Sy). RICE-Sy considers, at the time of the query, the knowledge that a user obtains via a new query and has obtained via their query history, and determines whether this is sufficient to allow that user to operate a channel. Thus, privacy protection systems can take advantage of the inferences detected by RICE-Sy, taking into account the information about individuals that attackers have obtained via a database of sensors, to further protect these individuals. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14149 ER - TY - THES A1 - Schlenker, Florian T1 - Delaunay Configuration B-Splines N2 - The generalization of univariate splines to higher dimensions is not straightforward. There are different approaches, each with its own advantages and drawbacks. A promising approach using Delaunay configurations and simplex splines is due to Neamtu. After recalling fundamentals of univariate splines, simplex splines, and the well-known multivariate DMS-splines, we address Neamtu’s DCB-splines. He defined two variants that we refer to as the nonpooled and the pooled approach, respectively. Regarding these spline spaces, we contribute the following results. We prove that, under suitable assumptions on the knot set, both variants exhibit the local finiteness property, i.e., these spline spaces are locally finite-dimensional and at each point only a finite number of basis candidate functions have a nonzero value. Additionally, we establish a criterion guaranteeing these properties within a compact region under weaker assumptions. Moreover, we show that the knot insertion process known from univariate splines does not work for DCB-splines and reason why this behavior is inherent to these spline spaces. Furthermore, we provide a necessary criterion for the knot insertion property to hold true for a specific inserted knot. This criterion is also sufficient for bivariate, nonpooled DCB-splines of degrees zero and one. Numerical experiments suggest that the sufficiency also holds true for arbitrary spline degrees. Univariate functions can be approximated in terms of splines using the Schoenberg operator, where the approximation error decreases quadratically as the maximum distance between consecutive knots is reduced. We show that the Schoenberg operator can be defined analogously for both variants of DCB-splines with a similar error bound. Additionally, we provide a counterexample showing that the basis candidate functions of nonpooled DCB-splines are not necessarily linearly independent, contrary to earlier statements in the literature. In particular, this implies that the corresponding functions are not a basis for the space of nonpooled DCB-splines. N2 - Univariate Splines können nicht unmittelbar auf mehrere Dimensionen verallgemeinert werden. Jedoch gibt es verschiedene Ansätze mit jeweils unterschiedlichen Vor- und Nachteilen. Eine vielversprechende Herangehensweise, die Delaunay-Konfigurationen und Simplex-Splines verwendet, stammt von Neamtu. Nachdem wir die Grundlagen von univariaten Splines, Simplex-Splines und den bekannten multivariaten DMS-Splines wiederholt haben, beschäftigen wir uns mit Neamtus DCB-Splines.
Er führte zwei verschiedene Varianten ein, die als nichtaggregierter beziehungsweise aggregierter Ansatz bezeichnet werden. In Bezug auf diese Splineräume präsentieren wir die folgenden Ergebnisse. Wir zeigen zum einen, dass beide Varianten unter geeigneten Voraussetzungen an die Knotenmenge die sogenannte Lokale-Endlichkeits-Eigenschaft besitzen. Dies bedeutet, dass die Splineräume lokal endlichdimensional sind und dass an jedem Punkt nur eine endliche Anzahl der Kandidaten an Basisfunktionen einen von null verschiedenen Wert aufweist. Zusätzlich ermitteln wir ein Kriterium, welches diese Eigenschaften auf einem kompakten Gebiet auch unter schwächeren Voraussetzungen garantiert. Darüber hinaus zeigen wir, dass der von den univariaten Splines her bekannte Prozess des Knoteneinfügens für DCB-Splines nicht funktioniert, und begründen, warum dieses Verhalten in der Natur dieser Splineräume liegt. Außerdem geben wir ein notwendiges Kriterium dafür an, dass die Knoteneinfüge-Eigenschaft für einen bestimmten einzufügenden Knoten gegeben sein kann. Für bivariate nicht-aggregierte DCB-Splines von Grad null und eins ist dieses Kriterium auch hinreichend. Numerische Experimente legen ferner die Vermutung nahe, dass dies unabhängig vom Splinegrad der Fall ist. Univariate Funktionen können mithilfe des Schoenberg-Operators durch Splines approximiert werden. Dabei hat eine Verringerung des maximalen Abstands zweier aufeinanderfolgender Knoten eine quadratische Verringerung des Approximationsfehlers zur Folge. Wir zeigen, dass der Schoenberg-Operator für beide Varianten von DCB-Splines auf analoge Art und Weise und mit einer ähnlichen Fehlerschranke definiert werden kann. Zusätzlich geben wir ein Gegenbeispiel an, das zeigt, dass die Basisfunktions-Kandidaten der nicht-aggregierten DCB-Splines nicht notwendigerweise linear unabhängig sind, was einen Gegensatz zu früheren Behauptungen in der Literatur darstellt. Dies impliziert insbesondere, dass die entsprechenden Funktionen keine Basis für den Raum der nicht-aggregierten DCB-Splines bilden. KW - Multivariate splines KW - Simplex splines KW - Delaunay triangulations KW - Delaunay configurations KW - Neamtu, Marian KW - Spline KW - Bivariater Spline KW - Spline-Raum KW - Simplexspline KW - Neamtu, Marian Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11225 ER - TY - THES A1 - Stier, Julian T1 - Structure of Artificial Neural Networks : Empirical Investigations N2 - Within one decade, Deep Learning overtook the dominating solution methods for countless problems in artificial intelligence. "Deep" refers to the deep architectures with operations in manifolds of which there are no immediate observations. For these deep architectures some kind of structure is pre-defined, but what is this structure? With a formal definition for structures of neural networks, neural architecture search problems and solution methods can be formulated under a common framework. Both practical and theoretical questions arise from closing the gap between applied neural architecture search and learning theory. Does structure make a difference or can it be chosen arbitrarily? This work is concerned with deep structures of artificial neural networks and examines automatic construction methods under empirical principles to shed light onto the so-called "black-box models". Our contributions include a formulation of graph-induced neural networks that is used to pose optimisation problems for neural architecture.
We analyse structural properties for different neural network objectives such as correctness, robustness, or energy consumption, and discuss how structure affects them. Selected automation methods for neural architecture optimisation problems are discussed and empirically analysed. With the insights gained from formalising graph-induced neural networks, analysing structural properties and comparing the applicability of neural architecture search methods qualitatively and quantitatively, we advance these methods in two ways. First, new predictive models are presented for replacing computationally expensive evaluation schemes, and second, new generative models for informed sampling during neural architecture search are analysed and discussed. KW - neural architecture KW - deep learning KW - graph induced neural networks Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14968 ER - TY - THES A1 - Juhos, Michael T1 - Probabilistic and geometric aspects of classical and non-commutative lp-type spaces in high dimensions N2 - This cumulative dissertation contains selected contributions to the field of asymptotic geometric analysis and high-dimensional probability. It is divided into two chapters: Chapter 1 explains some of the necessary theoretical background. In Section 1.1 it first gives a very concise history of asymptotic geometric analysis in general and then of the objects under study in particular, setting out some cornerstones in the discovery of the functional-analytic, geometric, and probabilistic properties of the spaces under consideration. The next section (1.2) gives the precise definitions and very basic properties of the three lp-type spaces that play a role in the contributed articles: the classical lp-sequence spaces, the mixed-norm sequence spaces, and the Schatten classes Sp, each in its infinite- and finite-dimensional version. Section 1.3 is dedicated to the interplay between geometry and probability, expounding the general idea, introducing a few of the common tools, and exemplifying these on two kinds of limit theorems: Schechtman-Schmuckenschläger-type results and Poincaré-Maxwell-Borel lemmas. The first chapter concludes with Section 1.4, addressing a small sample of open questions pertaining to the contributed articles which are not answered in said articles and may be of interest for future research. The entirety of Chapter 2 consists of the contributed articles. N2 - Diese kumulative Dissertation enthält ausgewählte Beiträge zum Gebiet der asymptotischen geometrischen Analyse und der hochdimensionalen Wahrscheinlichkeit. Sie ist in zwei Kapitel geteilt: Kapitel 1 erklärt ein wenig des notwendigen theoretischen Hintergrunds. Abschnitt 1.1 bringt eine sehr kurz gefasste Geschichte der asymptotischen geometrischen Analyse im Allgemeinen und der Studienobjekte im Speziellen und legt einige Eckpunkte der Entdeckung der funktionalanalytischen, geometrischen und probabilistischen Eigenschaften der betrachteten Räume dar. Der nächste Abschnitt (1.2) gibt die genauen Definitionen und sehr grundlegenden Eigenschaften jener drei lp-artigen Räume, die in den beigetragenen Artikeln eine Rolle spielen: die klassischen lp-Folgenräume, die Folgenräume mit gemischter Norm und die Schattenklassen Sp, jeweils in der unendlich- und der endlichdimensionalen Version.
Abschnitt 1.3 ist dem Zusammenspiel von Geometrie und Wahrscheinlichkeit gewidmet, er erörtert die allgemeine Idee, stellt einige der gebräuchlichen Werkzeuge vor und gibt zwei Beispiele von Grenzwertsätzen dazu: Schechtman-Schmuckenschläger-artige Ergebnisse und Poincaré-Maxwell-Borel-Lemmas. Das erste Kapitel schließt mit Abschnitt 1.4, der einen kleinen Satz von offenen Fragen anspricht, die zu den beigetragenen Artikeln gehören, darin aber nicht beantwortet werden und die das Interesse künftiger Forschung sein mögen. Kapitel 2 besteht zur Gänze aus den beigetragenen Artikeln. KW - functional analysis KW - asymptotic geometric analysis KW - limit theorems KW - lp-space KW - Schatten class Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14857 ER - TY - THES A1 - Auer, Michael T1 - Improving Automated Android Test Generation N2 - Mobile apps are nowadays the preferred means to accomplish ubiquitous tasks like messaging, e-commerce and even playing games. Often, there exist multiple apps for the same purpose, and it is the choice of the end user to pick an appropriate app. Apps that behave unexpectedly, e.g., crash frequently, are sooner or later replaced, which is undesirable for the companies developing such apps. Thus, it is essential to test apps properly before they are released onto the market. However, testing manually is often not only too cost-intensive but also too time-consuming in the short development phase; thus, an automated solution is preferred. Testing mobile apps automatically received increased attention in the last decade, primarily from academia, and several testing techniques evolved. One technique that has yielded promising results in various domains is search-based software testing, in which a metaheuristic, e.g., a genetic algorithm, is applied to solve an optimisation problem, e.g., test generation. A main objective of test generation is to produce tests that reveal as many faults as possible. This in turn requires the generation of tests that deeply explore the tested app. The core metric to quantify how much code tests cover is the measurement of code coverage, which can be computed at different levels of granularity, ranging from determining the fraction of covered activities to a very fine-grained measurement that calculates the percentage of covered lines. This coverage information is then often used to guide the search of the employed metaheuristic. However, current automated test generation approaches produce tests with a rather low code coverage. Thus, a substantial part of tested apps remains unexplored, which in turn prevents deeply residing faults from being revealed. We identified three core issues that are directly related to the generation of low-coverage tests. First, the applicability of current test generators is often limited. This comprises the fact that current state-of-the-art code coverage tools are incapable of instrumenting a substantial number of apps and, consequently, test generators cannot utilise detailed coverage information during exploration. In addition, test generators are often only equipped with a primitive set of actions that are insufficient to simulate system events and complex user inputs. Second, the test execution is extremely time-consuming. This includes, among other things, the overhead associated with executing individual actions, intermediate restart operations as well as fitness evaluations.
Since search-based algorithms require a substantial number of test executions to play out their strengths, the slow test execution impedes the effectiveness of the search. Third, the guidance offered by search-based algorithms is often hampered by applying inadequate fitness functions or by using non-representation-specific variation operators. In this thesis, we address the problem of low-coverage tests in the Android domain by proposing several enhancements for the three identified core issues. Concerning the applicability problem, we provide the implementation of a robust code coverage tool that is capable of measuring coverage at different levels of granularity and requires no access to the source code. We also propose to include actions that can simulate system events as well as complex user inputs. Regarding the performance issue, we suggest the integration of a surrogate model that is capable of predicting the outcome of individual actions or complete tests over time in order to reduce the overall test execution costs. With respect to the lack of guidance offered by traditional search-based algorithms, we suggest alternative search strategies. In the case of a deceptive fitness landscape, we propose using novelty search algorithms. Alternatively, we suggest utilising estimation of distribution algorithms that require no crossover or mutation operators to sample new tests. While all those enhancements had a positive impact on the Android test generation process, the individual empirical studies highlighted that further research is necessary to unleash the full power of the proposed search-based algorithms. In particular, exploring complex user interfaces meaningfully requires more attention, whether by introducing additional actions or by extracting valuable hints to infer reasonable text inputs. In addition, the guidance offered by fitness functions is often limited because they are either designed too coarsely or do not accurately reflect the search objectives. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-14955 ER - TY - JOUR A1 - Lengler, Johannes A1 - Opris, Andre A1 - Sudholt, Dirk T1 - Analysing Equilibrium States for Population Diversity JF - Algorithmica (ISSN: 1432-0541) N2 - Population diversity is crucial in evolutionary algorithms as it helps with global exploration and facilitates the use of crossover. Despite many runtime analyses showing advantages of population diversity, we have no clear picture of how diversity evolves over time. We study how the population diversity of (μ+1) algorithms, measured by the sum of pairwise Hamming distances, evolves in a fitness-neutral environment. We give an exact formula for the drift of population diversity and show that it is driven towards an equilibrium state. Moreover, we bound the expected time for getting close to the equilibrium state. We find that these dynamics, including the location of the equilibrium, are unaffected by surprisingly many algorithmic choices. All unbiased mutation operators with the same expected number of bit flips have the same effect on the expected diversity. Many crossover operators have no effect at all, including all binary unbiased, respectful operators. We review crossover operators from the literature and identify crossovers that are neutral towards the evolution of diversity and crossovers that are not.
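For reference, the diversity measure named in this abstract, the sum of pairwise Hamming distances, can be written out as follows; this is a standard rendering, with the notation S(P) and H(x, y) chosen here for illustration rather than taken from the paper.

```latex
% Diversity of a population P = {x^{(1)}, ..., x^{(mu)}} of n-bit strings,
% measured as the sum of pairwise Hamming distances.
\[
  S(P) = \sum_{1 \le i < j \le \mu} H\bigl(x^{(i)}, x^{(j)}\bigr),
  \qquad
  H(x, y) = \sum_{k=1}^{n} \mathbf{1}\bigl[x_k \neq y_k\bigr].
\]
```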
KW - Evolutionary algorithms KW - Runtime analysis KW - Diversity KW - Population dynamics Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2406282100292.409184699704 SN - 0178-4617 SN - 1432-0541 VL - 86 IS - 7 SP - 2317 EP - 2351 PB - Springer US CY - New York ER - TY - JOUR A1 - Schulte, Lukas A1 - Ledel, Benjamin A1 - Herbold, Steffen T1 - Studying the explanations for the automated prediction of bug and non-bug issues using LIME and SHAP JF - Empirical Software Engineering (ISSN: 1573-7616) N2 - Context: The identification of bugs within issues reported to an issue tracking system is crucial for triage. Machine learning models have shown promising results for this task. However, we have only limited knowledge of how such models identify bugs. Explainable AI methods like LIME and SHAP can be used to increase this knowledge. Objective: We want to understand if explainable AI provides explanations that are reasonable to us as humans and align with our assumptions about the model’s decision-making. We also want to know if the quality of predictions is correlated with the quality of explanations. Methods: We conduct a study where we rate LIME and SHAP explanations based on their quality in explaining the outcome of an issue type prediction model. For this, we rate the quality of the explanations, i.e., whether they align with our expectations and help us understand the underlying machine learning model. Results: We found that both LIME and SHAP give reasonable explanations and that correct predictions are well explained. Further, we found that SHAP outperforms LIME due to lower ambiguity and higher contextuality that can be attributed to the ability of the deep SHAP variant to capture sentence fragments. Conclusion: We conclude that the model finds explainable signals for both bugs and non-bugs. Also, we recommend that research dealing with the quality of explanations for classification tasks reports and investigates rater agreement, since the rating of explanations is highly subjective. KW - Explainable AI KW - LIME KW - SHAP KW - Issue type prediction Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2409232103207.812648424894 SN - 1382-3256 SN - 1573-7616 VL - 29 IS - 4 PB - Springer US CY - New York ER - TY - JOUR A1 - Aistleitner, Christoph A1 - Frühwirth, Lorenz A1 - Prochno, Joscha T1 - Diophantine conditions in the law of the iterated logarithm for lacunary systems JF - Probability Theory and Related Fields (ISSN: 1432-2064) N2 - It is a classical observation that lacunary function systems exhibit many properties which are typical for systems of independent random variables. However, it had already been observed by Erdős and Fortet in the 1950s that probability theory’s limit theorems may fail for lacunary sums $\sum f(n_k x)$ if the sequence $(n_k)_{k \ge 1}$ has a strong arithmetic “structure”. The presence of such structure can be assessed in terms of the number of solutions $k, l$ of two-term linear Diophantine equations $a n_k - b n_l = c$. As the first author proved with Berkes in 2010, saving an (arbitrarily small) unbounded factor for the number of solutions of such equations compared to the trivial upper bound rules out pathological situations as in the Erdős–Fortet example, and guarantees that $\sum f(n_k x)$ satisfies the central limit theorem (CLT) in a form which is in accordance with true independence.
In contrast, as shown by the first author, for the law of the iterated logarithm (LIL) the Diophantine condition which suffices to ensure “truly independent” behavior requires saving this factor of logarithmic order. In the present paper we show that, rather surprisingly, saving such a logarithmic factor is actually the optimal condition in the LIL case. This result reveals the remarkable fact that the arithmetic condition required of $(n_k)_{k \ge 1}$ to ensure that $\sum f(n_k x)$ shows “truly random” behavior is a different one at the level of the CLT than it is at the level of the LIL: the LIL requires a stronger arithmetic condition than the CLT does. KW - Lacunary trigonometric sums KW - Law of the iterated logarithm KW - Diophantine equations Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2406190937284.886045166092 SN - 0178-8051 SN - 1432-2064 VL - 192 IS - 1 SP - 545 EP - 574 PB - Springer CY - Berlin/Heidelberg ER - TY - JOUR A1 - Beurskens, Michael A1 - Scherzinger, Stefanie T1 - Legal Perspectives on Research Data Storage JF - Datenbank-Spektrum (ISSN: 1618-2162) N2 - Responsibly managing research data has become increasingly important for researchers, especially within the database research community. Despite significant progress in best practices, the state of the art in research data storage is lacking from a legal perspective. We introduce the stakeholders and the dynamic nature of their relationships (such as researchers changing affiliation) and observe that no existing infrastructure for research data storage fully meets their requirements. Therefore, we emphasize the need to design a comprehensive system architecture for research data storage that is aligned with legal considerations from the start, rather than as an afterthought. KW - Storage Contract KW - Copyright KW - Sui Generis Right KW - Privacy KW - Trade Secrets KW - Research Data Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2410012125227.647518861484 SN - 1618-2162 SN - 1610-1995 VL - 24 IS - 2 SP - 85 EP - 95 PB - Springer CY - Berlin/Heidelberg ER - TY - JOUR A1 - Franke, Jan A1 - Heinrich, Florian A1 - Reisch, Raven T. T1 - Vision based process monitoring in wire arc additive manufacturing (WAAM) JF - Journal of Intelligent Manufacturing (ISSN: 1572-8145) N2 - A stable welding process is crucial to obtain high-quality parts in wire arc additive manufacturing. The complexity of the process makes it inherently unstable, which can cause various defects, resulting in poor geometric accuracy and material properties. This calls for in-process monitoring and control mechanisms to industrialize the technology. In this work, process monitoring algorithms based on welding camera image analysis are presented. A neural network for semantic segmentation of the welding wire is used to monitor the working distance as well as the horizontal position of the wire during welding, and classic image processing techniques are applied to capture spatter formation. Using these algorithms, the process stability is evaluated in real time, and the analysis results enable direction-independent closed-loop control of the manufacturing process. This significantly improves geometric fidelity as well as mechanical properties of the fabricated part and allows the automated production of parts with complex deposition paths including weld bead crossings, curvatures and overhang structures.
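To illustrate the kind of classic image processing the WAAM abstract above refers to for capturing spatter, here is a minimal sketch assuming a grayscale-thresholding and connected-components approach; the function name, thresholds, and area limits are illustrative assumptions, not the authors' implementation, which the abstract does not specify at this level of detail.

```python
import cv2
import numpy as np

def detect_spatter(frame_bgr, min_area=3, max_area=200, brightness_thresh=220):
    """Crude spatter detector: bright, small, isolated blobs in a welding image.

    Thresholds and area limits are illustrative assumptions; a real system
    would calibrate them per camera and process, and mask out the arc region.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, brightness_thresh, 255, cv2.THRESH_BINARY)
    # Connected components separate the arc (one large blob) from spatter (small blobs).
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    spatter = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            spatter.append(tuple(centroids[i]))
    return spatter

# Example usage on a synthetic frame:
frame = np.zeros((120, 160, 3), dtype=np.uint8)
cv2.circle(frame, (80, 60), 20, (255, 255, 255), -1)  # arc: large bright blob, filtered out
cv2.circle(frame, (20, 20), 2, (255, 255, 255), -1)   # spatter droplet, detected
print(detect_spatter(frame))  # -> roughly [(20.0, 20.0)]
```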
KW - Wire arc additive manufacturing KW - Vision based monitoring KW - Machine learning KW - Nozzle-to-work distance monitoring KW - Contact tube wear off detection KW - Spatter detection Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2405052141549.085870766506 SN - 0956-5515 SN - 1572-8145 VL - 36 IS - 3 SP - 1711 EP - 1721 PB - Springer US CY - New York ER - TY - JOUR A1 - Andraschko, Bernhard A1 - Danner, Julian A1 - Kreuzer, Martin T1 - SAT Solving Using XOR-OR-AND Normal Forms JF - Mathematics in Computer Science (ISSN: 1661-8289) N2 - This paper introduces the XOR-OR-AND normal form (XNF) for logical formulas. It is a generalization of the well-known Conjunctive Normal Form (CNF) where literals are replaced by XORs of literals. As a first theoretical result, we show that every CNF formula is equisatisfiable to a formula in 2-XNF, i.e., a formula in XNF where each clause involves at most two XORs of literals. Subsequently, we present an algorithm which converts Boolean polynomials efficiently from their Algebraic Normal Form (ANF) to formulas in 2-XNF. Experiments with the cipher ASCON-128 show that cryptographic problems, which by design are strongly based on XOR operations, can be represented using far fewer variables and clauses in 2-XNF than in CNF. In order to take advantage of this compact representation, new SAT solvers based on input formulas in 2-XNF need to be designed. By taking inspiration from graph-based 2-CNF SAT solving, we devise a new DPLL-based SAT solver for formulas in 2-XNF. Among other contributions, we present advanced pre- and in-processing techniques. Finally, we give timings for random 2-XNF instances and instances related to key recovery attacks on round-reduced ASCON-128, where our solver outperforms state-of-the-art alternative solving approaches. KW - SAT solving KW - XOR constraint KW - Algebraic normal form KW - Implication graph KW - Cryptographic attack KW - 03B70 KW - 13P15 KW - 05C90 KW - 94A60 Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2502242130589.859216082138 SN - 1661-8270 SN - 1661-8289 VL - 18 IS - 4 PB - Springer International Publishing CY - Cham ER - TY - JOUR A1 - Kosch, Harald A1 - Brunie, Lionel A1 - Mayer, Tobias A1 - Hasan, Omar A1 - Schiedermeier, Maximilian T1 - Anonymous voting using distributed ledger-assisted secure multi-party computation JF - Applied Network Science N2 - High voter turnout in elections and referendums is desirable to ensure a robust democracy. Secure electronic voting is a vision for the future of elections and referendums. Such a system can counteract factors hindering strong voter turnout, such as the requirement of physical presence during limited hours at polling stations. However, this vision brings transparency and confidentiality requirements that render the design of such solutions challenging. Specifically, the counting implementation must support reproducibility, and the choice of individual voters must remain confidential. In this paper, we propose and evaluate a novel referendum protocol that ensures transparency, confidentiality, and integrity in trustless networks. The protocol is built by combining secure multi-party computation and distributed ledger technology, e.g., a blockchain. The persistence and immutability of the protocol communication allow verifiability of the referendum outcome by any participant. Voters therefore do not need to trust third parties.
We provide a formal description and conduct a thorough security evaluation of our proposal. KW - E-voting KW - Distributed ledger KW - Trustless networks KW - Transparency Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-15176 VL - 2024 IS - 9 PB - Springer International Publishing CY - Cham (Schweiz) ER - TY - JOUR A1 - Ghodselahi, Abdolhamid A1 - Kuhn, Fabian T1 - Toward Online Mobile Facility Location on General Metrics JF - Theory of Computing Systems N2 - We introduce an online variant of mobile facility location (MFL), introduced by Demaine et al. (SODA 2007, pp. 258–267). We call this new problem online mobile facility location (OMFL). In the OMFL problem, initially, we are given a set of k mobile facilities with their starting locations. One by one, requests are added. After each request arrives, one can make some changes to the facility locations before the subsequent request arrives. Each request is always assigned to the nearest facility. The cost of this assignment is the distance from the request to the facility. The objective is to minimize the total cost, which consists of the relocation cost of facilities and the distance cost of requests to their nearest facilities. We provide a lower bound for the OMFL problem that even holds on uniform metrics. A natural approach to solve the OMFL problem for general metric spaces is to utilize hierarchically well-separated trees (HSTs) and directly solve the OMFL problem on HSTs. In this paper, we provide the first step in this direction by solving a generalized variant of the OMFL problem on uniform metrics that we call G-OMFL. We devise a simple deterministic online algorithm and provide a tight analysis for the algorithm. The second step remains an open question. Inspired by the k-server problem, we introduce a new variant of the OMFL problem that focuses solely on minimizing movement cost. We refer to this variant as M-OMFL. Additionally, we provide a lower bound for M-OMFL that is applicable even on uniform metrics. KW - Mobile resources KW - Online requests KW - Competitive analysis KW - General cost function KW - Movement minimization Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2024022609363705445898 VL - 67 IS - 6 SP - 1268 EP - 1306 PB - Springer Nature CY - Berlin ER - TY - JOUR A1 - von der Heyde, Markus A1 - Gerl, Armin T1 - Entwicklungsstand der CIO-Funktion und hochschulübergreifenden IT-Governance im Kontext der Digitalen Transformation an Hochschulen in Bayern JF - HMD Praxis der Wirtschaftsinformatik N2 - Die Hochschulen befinden sich durch vielfältige Veränderungsprozesse in Verbindung mit dem Einsatz von Informationstechnologien (IT) auf dem Weg der Digitalen Transformation. Diese Digitale Transformation der Hochschulen umfasst intensive Veränderungsprozesse in der gesamten Hochschulkultur in Lehre, Forschung und Verwaltung in übergreifender und strukturierter Weise. Seit vielen Jahren werden vielfältige Digitalisierungsvorhaben zur Modernisierung von einzelnen Prozessen an den Hochschulen umgesetzt. Die Leitungen der Rechenzentren leisten mit der Umsetzung von IT-Projekten einen zentralen Beitrag zu diesem Wandel. Mit der Einführung der CIO-Funktion in den Hochschulleitungen und der hochschulübergreifenden Kooperationen hat sich die IT-Governance weiterentwickelt.
Insbesondere für die Digitale Transformation werden Strukturen zur Koordination der übergreifenden Vorhaben benötigt, wobei zusätzlich zur IT-Leitung eine Vielzahl von Funktionsträgern mit fachlichen Aufgaben aus Forschung, Lehre und Verwaltung involviert ist. Es stellt sich die Frage, wie die Digitale Transformation an Hochschulen gesteuert werden kann und in welcher organisatorischen Form sich die Aufgaben und Verantwortlichkeiten im Hochschulkontext realisieren lassen. An der Weiterentwicklung der IT-Governance an bayerischen Hochschulen wird beispielhaft erläutert, welche übergreifenden Aufgaben der Koordination von Bedarf und Versorgung mit IT-Services zwischen und innerhalb der Hochschulen bestehen. Die CIO-Funktion wird durch die Verankerung in der Leitungsebene der Funktion des Chief Digital Officers (CDO) aus der Wirtschaft ähnlicher, auch wenn in Hochschulen aufgrund der klassischen Ressort-Einteilung die Rolle oft als Vizepräsident:in für Digitalisierung bezeichnet wird. KW - Digitale Transformation KW - IT-Governance KW - Hochschulen KW - Bildung KW - CIO KW - CDO Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2022072222053919381076 VL - 2022 IS - 59 SP - 881 EP - 895 PB - Springer Nature CY - Berlin ER - TY - JOUR A1 - Hevia Fajardo, Mario Alejandro A1 - Sudholt, Dirk T1 - Self-adjusting Population Sizes for Non-elitist Evolutionary Algorithms: why Success Rates Matter JF - Algorithmica N2 - Evolutionary algorithms (EAs) are general-purpose optimisers that come with several parameters like the sizes of parent and offspring populations or the mutation rate. It is well known that the performance of EAs may depend drastically on these parameters. Recent theoretical studies have shown that self-adjusting parameter control mechanisms that tune parameters during the algorithm run can provably outperform the best static parameters in EAs on discrete problems. However, the majority of these studies concerned elitist EAs, and we do not have a clear answer on whether the same mechanisms can be applied to non-elitist EAs. We study one of the best-known parameter control mechanisms, the one-fifth success rule, to control the offspring population size λ in the non-elitist (1, λ) EA. It is known that the (1, λ) EA has a sharp threshold with respect to the choice of λ where the expected runtime on the benchmark function OneMax changes from polynomial to exponential time. Hence, it is not clear whether parameter control mechanisms are able to find and maintain suitable values of λ. For OneMax we show that the answer crucially depends on the success rate s (i.e. a one-(s+1)-th success rule). We prove that, if the success rate is appropriately small, the self-adjusting (1, λ) EA optimises OneMax in O(n) expected generations and O(n log n) expected evaluations, the best possible runtime for any unary unbiased black-box algorithm. A small success rate is crucial: we also show that if the success rate is too large, the algorithm has an exponential runtime on OneMax and other functions with similar characteristics. KW - Evolutionary algorithms KW - Parameter control KW - Theory KW - Runtime analysis KW - Non-elitism Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:101:1-2023102616463077837436 VL - 86 IS - 2 SP - 526 EP - 565 PB - Springer Nature CY - Berlin ER -
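As a closing illustration of the mechanism analysed in this last abstract, here is a minimal sketch, our own rather than the authors' code, of the self-adjusting (1,λ) EA on OneMax with a one-(s+1)-th success rule: λ is divided by F after a success and multiplied by F^(1/s) after a failure, so that λ remains unchanged when exactly one in s+1 generations is successful. The constants F and s below are illustrative assumptions.

```python
import random

def onemax(x):
    return sum(x)

def self_adjusting_one_comma_lambda_ea(n=100, F=1.5, s=0.5, max_gens=100_000):
    """Self-adjusting (1,λ) EA with a one-(s+1)-th success rule (sketch).

    On success (best offspring strictly better than the parent) λ is divided
    by F; on failure λ is multiplied by F**(1/s). F and s are illustrative
    choices; the paper analyses how the success rate governs the runtime.
    """
    random.seed(1)
    parent = [random.randint(0, 1) for _ in range(n)]
    lam = 1.0
    for gen in range(max_gens):
        offspring = []
        for _ in range(max(1, round(lam))):
            # Standard bit mutation: flip each bit independently with prob. 1/n.
            child = [bit ^ (random.random() < 1 / n) for bit in parent]
            offspring.append(child)
        best = max(offspring, key=onemax)
        if onemax(best) > onemax(parent):
            lam = max(1.0, lam / F)      # success: shrink offspring population
        else:
            lam = lam * F ** (1 / s)     # failure: grow offspring population
        parent = best                    # comma selection: parent always replaced
        if onemax(parent) == n:
            return gen + 1               # generations until the optimum was found
    return None

if __name__ == "__main__":
    print("generations:", self_adjusting_one_comma_lambda_ea())
```

Note the non-elitism: the parent is replaced by the best offspring even when that offspring is worse, which is exactly why maintaining a suitable λ is delicate in this setting.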