TY - THES A1 - Kell, Christian T1 - A Structure-based Attack on the Linearized Braid Group-based Diffie-Hellman Conjugacy Problem in Combination with an Attack using Polynomial Interpolation and the Chinese Remainder Theorem N2 - This doctoral thesis is dedicated to improving a linear algebra attack on the so-called braid group-based Diffie-Hellman conjugacy problem (BDHCP). The general procedure of the attack is to transform a BDHCP into the problem of solving several simultaneous matrix equations. A first improvement is achieved by reducing the solution space of the matrix equations to matrices that have a specific structure, which we call here the left braid structure. Using the left braid structure, the number of matrix equations to be solved reduces to one. Based on the left braid structure, we are further able to formulate a structure-based attack on the BDHCP. That is, we transform the matrix equation into a system of linear equations and exploit the structure of the corresponding extended coefficient matrix, which is induced by the left braid structure of the solution space. The structure-based attack then has an empirically high probability of solving the BDHCP with significantly fewer arithmetic operations than the original attack. A third improvement of the original linear algebra attack is to use an algorithm that combines Gaussian elimination with integer polynomial interpolation and the Chinese remainder theorem (CRT), instead of fast matrix multiplication as suggested by others. The major idea here is to distribute the task of solving a system of linear equations over a giant finite field to several much smaller finite fields. Based on our empirically measured bounds for the degree of the polynomials to be interpolated and the bit size of the coefficients and integers to be recovered via the CRT, we conclude that the run time complexity of the original algorithm improves by a factor of n^8 bit operations in the best case, and still by n^6 in the worst case. KW - Linear algebra attack KW - Braid group-based cryptography KW - Row echelon form calculation using polynomial interpolation and the Chinese remainder theorem KW - Diffie-Hellman conjugacy problem KW - Kryptologie KW - Zopfgruppe KW - Diffie-Hellman-Algorithmus Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6476 ER - TY - THES A1 - Tueno, Anselme T1 - Multiparty Protocols for Tree Classifiers N2 - Cryptography is the scientific study of techniques for securing information and communication against adversaries. It is about designing and analyzing encryption schemes and protocols that protect data from unauthorized reading. However, in our modern information-driven society with highly complex and interconnected information systems, encryption alone is no longer enough, as it makes the data unintelligible, preventing any meaningful computation without decryption. On the one hand, data owners want to maintain control over their sensitive data. On the other hand, there is a high business incentive for collaborating with an untrusted external party. Modern cryptography encompasses different techniques, such as secure multiparty computation, homomorphic encryption or order-preserving encryption, that enable cloud users to encrypt their data before outsourcing it to the cloud while still being able to process and search on the outsourced and encrypted data without decrypting it.
In this thesis, we rely on these cryptographic techniques for computing on encrypted data to propose efficient multiparty protocols for order-preserving encryption, decision tree evaluation and kth-ranked element computation. We start with order-preserving encryption (OPE), which allows encrypting data while still enabling efficient range queries on the encrypted data. However, OPE is symmetric, limiting the use case to one client and one server. Imagine a scenario where a Data Owner (DO) outsources encrypted data to the Cloud Service Provider (CSP) and a Data Analyst (DA) wants to execute private range queries on this data. Then either the DO must reveal its encryption key or the DA must reveal the private queries. We overcome this limitation by allowing the equivalent of a public-key OPE. Decision trees are common and very popular classifiers because they are explainable. The problem of evaluating a private decision tree on private data consists of a server holding a private decision tree and a client holding a private attribute vector. The goal is to classify the client’s input using the server’s model such that the client learns only the result of the classification, and the server learns nothing. In a first approach, we represent the tree as an array and execute only d interactive comparisons (instead of 2^d as in existing solutions), where d denotes the depth of the tree. In a second approach, we delegate the complete tree evaluation to the server using somewhat or fully homomorphic encryption where the ciphertexts are encrypted under the client’s public key. A generalization of a decision tree is a random forest that consists of many decision trees. A classification with a random forest evaluates each decision tree in the forest and outputs the classification label which occurs most often. Hence, the classification labels are ranked by their number of occurrences and the final result is the best ranked one. The best ranked element is a special case of the kth-ranked element. In this thesis, we consider the secure computation of the kth-ranked element in a distributed setting with applications in benchmarking and auctions. We propose different approaches for privately computing the kth-ranked element in a star network, using either garbled circuits or threshold homomorphic encryption. KW - Mathematik KW - Kryptologie Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8251 ER - TY - THES A1 - Taubmann, Benjamin T1 - Improving Digital Forensics and Incident Analysis in Production Environments by Using Virtual Machine Introspection N2 - Main memory forensics and its special form, virtual machine introspection (VMI), are powerful tools for digital forensics and can be used to improve the security of computer-based systems. However, their use in production systems is often not possible. This work identifies the causes and offers practical solutions to apply these techniques in cloud computing and on mobile devices to improve digital forensics and incident analysis. Four key challenges must be tackled. The first challenge is that many existing solutions are not reproducible, for example, because the corresponding software components are not available, obsolete or incompatible. The use of these tools is also often complex and can lead to a crash of the system to be monitored in case of incorrect use.
To solve this problem, this thesis describes the design and implementation of Libvmtrace, which is a framework for the introspection of Linux-based virtual machines. The focus of the developed design is to implement frequently used methods in encapsulated modules so that they are easy for developers to use, optimize and test. The second challenge is that many production systems do not provide an interface for main memory forensics and virtual machine introspection. To address this problem, this thesis describes possible solutions for how such an interface can be implemented on mobile devices and in cloud environments designed to protect main memory from unprivileged access. We discuss how cold boot attacks, the ARM TrustZone and the hypervisor of cloud servers can be used to acquire data from storage. The third challenge is how to reconstruct information from main memory efficiently. This thesis describes how these questions can be solved by employing two practical examples. The first example involves extracting the keys of encrypted TLS connections from the main memory of applications to decrypt network traffic without affecting the performance of the monitored application. The TLSKex and DroidKex architecture describe two approaches to localize the keys efficiently with the help of semantic knowledge in the main memory of applications. The second example discusses how to monitor and document SSH sessions of potential attackers from outside of a virtual machine. It is important that the monitoring routines are not noticed by an attacker. To achieve this, we evaluate how to optimize the performance of the monitoring mechanism. The fourth challenge is how to deal with the performance degradation caused by introspection in productive systems. This thesis discusses how this can be achieved using the example of a SIEM system. To reduce the performance overhead, we describe how to configure the monitoring routine to collect only the information needed to detect incidents. Also, we describe two approaches that permit the monitoring routine to be dynamically adjusted at runtime to extract more information if necessary so that incidents can be better analyzed. KW - Digital Forensics KW - Virtual Machine Introspection KW - Production Environments KW - Incident Detection KW - Computerforensik KW - Eindringerkennung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8319 ER - TY - THES A1 - Kurz, Thomas T1 - Adapting Semantic Web Information Retrieval to Multimedia N2 - The amount of audio, video and image data on the Web is immensely growing, which leads to data management problems based on the hidden character of Multimedia. Therefore the interlinking of semantic concepts and media data with the aim to bridge the gap between the Internet of documents and the Web of Data has become a common practice. However, the value of connecting media to its semantic meta data is limited due to lacking access methods and the absence of an adapted query language specialized for media assets and fragments. This thesis aims to extend the standard query language for the Semantic Web (SPARQL) with media specific concepts and functions. The main contributions of the work are an exhaustive survey on Multimedia query languages of the last 3 decades, the SPARQL extension specification itself and an approach for the efficient evaluation of the new query concepts. 
Additionally I elaborate and evaluate a meta data based media fragment similarity approach, which provides a basis for further language extensions. N2 - Das Wachstum multimedialer Daten wie Audio, Video und Bilder war in den letzten Jahren immens. Das Besondere an dieser Art der Daten ist die versteckte Semantik, die sich nur schwer mit herkömmlichen Information Retrieval Funktionen verbinden lässt und dadurch zu Problemen im Management der Multimedia Daten führt. Konzepte des Semantic Web eignen sich allerdings sehr gut, diese Lücke zu schließen, was sich in vielen Szenarien bereits positiv etabliert hat. Nichtsdestotrotz fehlen mit geeigneten Zugriffsmethoden und einer adaptierten Anfragesprache wichtige Teile, um dieses Konzept der verlinkten Multimedia Daten abzurunden und voll in einem End-to-End Prozess zu verwenden. In dieser Arbeit stelle ich eine Erweiterung der Standard-Anfragesprache im Semantic Web (SPARQL) um multimedia-spezifische Funktionen vor. Der wissenschaftliche Beitrag lässt sich dabei in drei Teile gliedern: Ein umfassendes Survey zu Multimedia Anfragesprachen der letzten 30 Jahre, die Erweiterung von SPARQL inklusive einer geeigneten Methodik zur Anfrageoptimierung, sowie ein Ansatz zur fragment-basierten Ähnlichkeitsberechnung von Bildern mit zugehöriger Evaluierung. KW - SPARQL KW - Semantic Web KW - Multimedia KW - SPARQL KW - Multimedia KW - Semantic Web KW - Web of Data KW - SPARQL-MM Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8276 ER - TY - THES A1 - Ehlers, Christoph T1 - Top-k Semantic Caching N2 - The subject of this thesis is the intelligent caching of top-k queries in an environment with high latency and low throughput. In such an environment, caching can be used to reduce network traffic and improve response time. Slow database connections of mobile devices and to databases, which have been offshored, are practical use cases. A semantic cache is a query-based cache that caches query results and maintains their semantic description. It reuses partial matches of previous query results. Each query that is processed by the semantic cache is split into two disjoint parts: one that can be completely answered with tuples of the cache probe query, and another that requires tuples to be transferred from the server (remainder query). Existing semantic caches do not support top-k queries, i.e., ordered and limited queries. In this thesis, we present an innovative semantic cache that naturally supports top-k queries. The support of top-k queries in a semantic cache has considerable effects on cache elements, operations on cache elements -- like creation, difference, intersection, and union -- and query answering. Hence, we introduce new techniques for cache management and query processing. They enable the semantic cache to become a true top-k semantic cache. In addition, we have developed a new algorithm that can estimate the lower bounds of query results of sorted queries using multidimensional histograms. Using this algorithm, our top-k semantic cache is able to pipeline partial query results of top-k queries. Thereby, query execution performance can be significantly increased. We have implemented a prototype of a top-k semantic cache called IQCache (Intelligent Query Cache). An extensive and thorough evaluation with various benchmarks using our prototype demonstrates the applicability and performance of top-k semantic caching in practice. 
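To make the probe/remainder split described in this record more concrete, the following minimal Python sketch (not taken from the thesis; the Range representation, function name, and the restriction to one-dimensional range predicates are illustrative assumptions, and the top-k ORDER BY/LIMIT handling is omitted) shows how an incoming query predicate could be divided into a part answerable from the cache and a part that must be fetched from the server.

from dataclasses import dataclass

@dataclass
class Range:
    low: float
    high: float  # closed interval [low, high]

def split_probe_remainder(query: Range, cached: Range):
    """Split a one-dimensional range query against a cached range.
    Returns (probe, remainders): the probe part is answerable from the
    cache, the remainder parts must be fetched from the server."""
    lo, hi = max(query.low, cached.low), min(query.high, cached.high)
    probe = Range(lo, hi) if lo <= hi else None
    remainders = []
    if query.low < cached.low:
        remainders.append(Range(query.low, min(query.high, cached.low)))
    if query.high > cached.high:
        remainders.append(Range(max(query.low, cached.high), query.high))
    return probe, remainders

# Example: the cache holds tuples for [10, 50], the new query asks for [30, 80].
# probe -> [30, 50] served locally, remainder -> [50, 80] sent to the server;
# a real implementation would use half-open intervals to avoid duplicate boundary tuples.
probe, rest = split_probe_remainder(Range(30, 80), Range(10, 50))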
The experiments prove that the top-k semantic cache invariably outperforms simple hash-based caching strategies and scales very well. KW - Database KW - Caching KW - Semantic Caching KW - Top-k KW - Mobile KW - Semantisches Caching KW - Abfrageverarbeitung Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3055 ER - TY - THES A1 - Braun, Bastian T1 - Web-based Secure Application Control N2 - The world wide web today serves as a distributed application platform. Its origins, however, go back to a simple delivery network for static hypertexts. The legacy from these days can still be observed in the communication protocol used by increasingly sophisticated clients and applications. This thesis identifies the actual security requirements of modern web applications and shows that HTTP does not fit them: user and application authentication, message integrity and confidentiality, control-flow integrity, and application-to-application authorization. We explore the other protocols in the web stack and work out why they can not fill the gap. Our analysis shows that the underlying problem is the connectionless property of HTTP. However, history shows that a fresh start with web communication is far from realistic. As a consequence, we come up with approaches that contribute to meet the identified requirements. We first present impersonation attack vectors that begin before the actual user authentication, i.e. when secure web interaction and authentication seem to be unnecessary. Session fixation attacks exploit a responsibility mismatch between the web developer and the used web application framework. We describe and compare three countermeasures on different implementation levels: on the source code level, on the framework level, and on the network level as a reverse proxy. Then, we explain how the authentication credentials that are transmitted for the user login, i.e. the password, and for session tracking, i.e. the session cookie, can be complemented by browser-stored and user-based secrets respectively. This way, an attacker can not hijack user accounts only by phishing the user's password because an additional browser-based secret is required for login. Also, the class of well-known session hijacking attacks is mitigated because a secret only known by the user must be provided in order to perform critical actions. In the next step, we explore alternative approaches to static authentication credentials. Our approach implements a trusted UI and a mutually authenticated session using signatures as a means to authenticate requests. This way, it establishes a trusted path between the user and the web application without exchanging reusable authentication credentials. As a downside, this approach requires support on the client side and on the server side in order to provide maximum protection. Another approach avoids client-side support but can not implement a trusted UI and is thus susceptible to phishing and clickjacking attacks. Our approaches described so far increase the security level of all web communication at all time. This is why we investigate adaptive security policies that fit the actual risk instead of permanently restricting all kinds of communication including non-critical requests. We develop a smart browser extension that detects when the user is authenticated on a website meaning that she can be impersonated because all requests carry her identity proof. 
Uncritical communication, however, is released from restrictions to enable all intended web features. Finally, we focus on attacks targeting a web application's control-flow integrity. We explain them thoroughly, check whether current web application frameworks provide means for protection, and implement two approaches to protect web applications: The first approach is an extension for a web application framework and provides protection based on its configuration by checking all requests for policy conformity. The second approach generates its own policies ad hoc based on the observed web traffic and assuming that regular users only click on links and buttons and fill forms but do not craft requests to protected resources. N2 - Das heutige World Wide Web ist eine verteilte Plattform für Anwendungen aller Art: von einfachen Webseiten über Online Banking, E-Mail, multimediale Unterhaltung bis hin zu intelligenten vernetzten Häusern und Städten. Seine Ursprünge liegen allerdings in einem einfachen Netzwerk zur Übermittlung statischer Inhalte auf der Basis von Hypertexten. Diese Ursprünge lassen sich noch immer im verwendeten Kommunikationsprotokoll HTTP identifizieren. In dieser Arbeit untersuchen wir die Sicherheitsanforderungen moderner Web-Anwendungen und zeigen, dass HTTP diese Anforderungen nicht erfüllen kann. Zu diesen Anforderungen gehören die Authentifikation von Benutzern und Anwendungen, die Integrität und Vertraulichkeit von Nachrichten, Kontrollflussintegrität und die gegenseitige Autorisierung von Anwendungen. Wir untersuchen die Web-Protokolle auf den unteren Netzwerk-Schichten und zeigen, dass auch sie nicht die Sicherheitsanforderungen erfüllen können. Unsere Analyse zeigt, dass das grundlegende Problem in der Verbindungslosigkeit von HTTP zu finden ist. Allerdings hat die Geschichte gezeigt, dass ein Neustart mit einem verbesserten Protokoll keine Option für ein gewachsenes System wie das World Wide Web ist. Aus diesem Grund beschäftigt sich diese Arbeit mit unseren Beiträgen zu sicherer Web-Kommunikation auf der Basis des existierenden verbindungslosen HTTP. Wir beginnen mit der Beschreibung von Session Fixation-Angriffen, die bereits vor der eigentlichen Anmeldung des Benutzers an der Web-Anwendung beginnen und im Erfolgsfall die temporäre Übernahme des Benutzerkontos erlauben. Wir präsentieren drei Gegenmaßnahmen, die je nach Eingriffsmöglichkeiten in die Web-Anwendung umgesetzt werden können. Als nächstes gehen wir auf das Problem ein, dass Zugangsdaten im WWW sowohl zwischen den Teilnehmern zu Authentifikationszwecken kommuniziert werden als auch für jeden, der Kenntnis dieser Daten erlangt, wiederverwendbar sind. Unsere Ansätze binden das Benutzerpasswort an ein im Browser gespeichertes Authentifikationsmerkmal und das sog. Session-Cookie an ein Geheimnis, das nur dem Benutzer und der Web-Anwendung bekannt ist. Auf diese Weise kann ein Angreifer weder ein gestohlenes Passwort noch ein Session-Cookie allein zum Zugriff auf das Benutzerkonto verwenden. Darauffolgend beschreiben wir ein Authentifikationsprotokoll, das vollständig auf die Übermittlung geheimer Zugangsdaten verzichtet. Unser Ansatz implementiert eine vertrauenswürdige Benutzeroberfläche und wirkt so gegen die Manipulation derselben in herkömmlichen Browsern. 
Während die bisherigen Ansätze die Sicherheit jeglicher Web-Kommunikation erhöhen, widmen wir uns der Frage, inwiefern ein intelligenter Browser den Benutzer - wenn nötig - vor Angriffen bewahren kann und - wenn möglich - eine ungehinderte Kommunikation ermöglichen kann. Damit trägt unser Ansatz zur Akzeptanz von Sicherheitslösungen bei, die ansonsten regelmäßig als lästige Einschränkungen empfunden werden. Schließlich legen wir den Fokus auf die Kontrollflussintegrität von Web-Anwendungen. Bösartige Benutzer können den Zustand von Anwendungen durch speziell präparierte Folgen von Anfragen in ihrem Sinne manipulieren. Unsere Ansätze filtern Benutzeranfragen, die von der Anwendung nicht erwartet wurden, und lassen nur solche Anfragen passieren, die von der Anwendung ordnungsgemäß verarbeitet werden können. KW - Computersicherheit KW - Datensicherung KW - Internet Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3048 ER - TY - THES A1 - Tran, Nguyen Khanh Linh T1 - Kaehler Differential Algebras for 0-Dimensional Schemes and Applications N2 - The aim of this dissertation is to investigate Kaehler differential algebras and their Hilbert functions for 0-dimensional schemes in P^n. First we give relations between Kaehler differential 1-forms of fat point schemes and another fat point schemes. Then we determine the Hilbert polynomial and give a sharp bound for the regularity index of the module of Kaehler differential m-forms, for 05%) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without data set specific tuning. Moreover, we discuss the influence of the quality of the knowledge base on the disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms. KW - Entity Linking KW - Entity Disambiguation KW - Neuronal Networks KW - Embeddings Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3704 ER - TY - THES A1 - Alshawish, Ali T1 - Risk-based Security Management in Critical Infrastructure Organizations N2 - Critical infrastructure and contemporary business organizations are experiencing an ongoing paradigm shift of business towards more collaboration and agility. On the one hand, this shift seeks to enhance business efficiency, coordinate large-scale distribution operations, and manage complex supply chains. But, on the other hand, it makes traditional security practices such as firewalls and other perimeter defenses insufficient. Therefore, concerns over risks like terrorism, crime, and business revenue loss increasingly impose the need for enhancing and managing security within the boundaries of these systems so that unwanted incidents (e.g., potential intrusions) can still be detected with higher probabilities. To this end, critical infrastructure organizations step up their efforts to investigate new possibilities for actively engaging in situational awareness practices to ensure a high level of persistent monitoring as well as on-site observation. Compliance with security standards is necessary to ensure that organizations meet regulatory requirements mostly shaped by a set of best practices. Nevertheless, it does not necessarily result in a coherent security strategy that considers the different aims and practical constraints of each organization. 
In this regard, there is an increasingly growing demand for risk-based security management approaches that enable critical infrastructures to focus their efforts on mitigating the risks to which they are exposed. Broadly speaking, security management involves the identification, assessment, and evaluation of long-term (or overall) objectives and interests as well as the means of achieving them. Due to the critical role of such systems, their decision-makers tend to enhance the system resilience against very unpleasant outcomes and severe consequences. That is, they seek to avoid decision options associated with likely extreme risks in the first place. Practically speaking, this risk attitude can significantly influence the decision-making process in such critical organizations. Towards incorporating the aversion to extreme risks into security management decisions, this thesis investigates thoroughly the capabilities of a recently emerged theory of games with payoffs that are probability distributions. Unlike traditional optimization techniques, this theory provides an alternative decision technique that is more robust to extreme risks and uncertainty. Furthermore, this thesis proposes a new method that gives a decision maker more control over the decision-making process through defining loss regions with different importance levels according to people's risk attitudes. In this way, the static decision analysis used in the distribution-valued games is transformed into a dynamic process to adapt to different subjective risk attitudes or account for future changes in the decision caused by a learning process or other changes in the context. Throughout their different parts, this thesis shows how theoretical models, simulation, and risk assessment models can be combined into practical solutions. In this context, it deals with three facets of security management: allocating limited security resources, prioritizing security actions, and tweaking decision making. Finally, the author discusses experiences and limitations distilled from this research and from investigating the new theory of games, which can be taken into account in future approaches. KW - Security Management KW - Game Theory KW - Critical Infrastructures KW - Risk Attitude KW - Uncertainty KW - Spieltheorie KW - Risikomanagement Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10026 ER - TY - THES A1 - Silva, Vivian dos Santos T1 - A Composite Syntactic-Semantic Interpretable Text Entailment Approach Exploring Commonsense Knowledge Graphs N2 - Natural Language Processing has an important role in Artificial Intelligence for easing human-machine interaction. Processing human language, though, poses many challenges, among which is the semantics-related phenomenon known as language variability, the fact that the same thing can be said in several ways. NLP applications' inputs and outputs can be expressed in different forms, whose equivalence can be verified through inference. The textual entailment paradigm was established to enable the creation of a unifying framework for applied inference, providing a means of delivering other NLP task from handling inference issues in an ad-hoc manner, using instead the outputs of an inference-dedicated mechanism. Text entailment, the task of determining whether a piece of text logically follows from another piece of text, involves different scenarios, which can range from a simple syntactic variation to more complex semantic relationships between sentences. 
However, most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. The commonsense world knowledge necessary to support more complex inferences is also usually employed in a limited way, with most approaches sticking to shallow semantic information, leaving more elaborate semantic relationships aside. Furthermore, most systems still work as a "black box", providing a yes/no answer that does not explain the underlying reasoning process. This thesis aims at addressing these issues by proposing a composite interpretable approach for recognizing text entailment where the entailment pair is analyzed so the most relevant phenomenon is detected and the suitable method can be used to solve it. Syntactic variations are dealt with through the analysis of the sentences' syntactic structures, and semantic relationships are detected with the aid of a knowledge graph built from natural language dictionary definitions. Also, if a semantic matching is involved, the answer is made interpretable through the generation of natural language justifications that explain the semantic relationship between the pieces of text. The result is the XTE - Explainable Text Entailment - a system that outperforms well-established tools based on single-technique entailment algorithms, and that also gives an important step towards Explainable AI, allowing the inference model interpretation, making the semantic reasoning process explicit and understandable. KW - Textual Entailment KW - Knowledge Graph KW - Semantic Interpretability Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10706 ER - TY - THES A1 - Opris, Andre T1 - Holomorphic Extensions in the Structure R_{an,exp} N2 - In this thesis we consider real analytic functions, i.e. functions which can be described locally as convergent power series and ask the following: Which real analytic functions definable in R_{an,exp} have a holomorphic extension which is again definable in R_{an,exp}? Finding a holomorphic extension is of course not difficult simply by power series expansion. The difficulty is to construct it in a definably way. We will not answer the question above completely, but introduce a large non trivial class of definable functions in R_{an,exp} where for example functions which are iterated compositions from either side of globally subanalytic functions and the global logarithm are contained. We call them restricted log-exp-analytic. After giving some preliminary results like preparation theorems and Tamm's Theorem for this class of functions we are able to show that real analytic restricted log-exp-analytic functions have a holomorphic extension which is again restricted log-exp-analytic. KW - O-Minimality KW - Preparation Theorems KW - Restricted Log-Exp-Analytic Functions KW - Complexification KW - Tamm's Theorem KW - O-Minimalität Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10691 ER - TY - THES A1 - Schmid, Josef T1 - Learning-Based Quality of Service Prediction in Cellular Vehicle Communication N2 - Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. A new area where this development is on the rise is the field of connected vehicles. It is especially useful for automated vehicles in order to connect the vehicles with other road users or cloud services. 
In particular for the latter it is beneficial to establish a mobile network connection, as it is already widely used and no additional infrastructure is needed. With the use of network communication, certain requirements come along. One of them is the reliability of the connection. Certain Quality of Service (QoS) parameters need to be met. In case of degraded QoS, according to the SAE level specification, a downgrade of the automated system can be required, which may lead to a takeover maneuver, in which control is returned back to the driver. Since such a handover takes time, prediction is necessary to forecast the network quality for the next few seconds. Prediction of QoS parameters, especially in terms of Throughput (TP) and Latency (LA), is still a challenging task, as the wireless transmission properties of a moving mobile network connection are undergoing fluctuation. In this thesis, a new approach for prediction Network Quality Parameters (NQPs) on Transmission Control Protocol (TCP) level is presented. It combines the knowledge of the environment with the low level parameters of the mobile network. The aim of this work is to perform a comprehensive study of various models including both Location Smoothing (LS) grid maps and Learning Based (LB) regression ones. Moreover, the possibility of using the location independence of a model as well as suitability for automated driving is evaluated. N2 - Netzwerkkommunikation ist zu einem Teil des täglichen Lebens geworden, und die Vernetzung von Geräten und Menschen wird in Zukunft noch weiter zunehmen. Ein neuer Bereich, in dem diese Entwicklung zunimmt, sind vernetzte Fahrzeuge. Es ist vorteilhaft automatisierte Fahrzeuge mit anderen Verkehrsteilnehmern oder Cloud-Diensten zu verbinden. Insbesondere für letztere ist der Einsatz einer mobilen Netzwerkverbindung zweckmäßig, da sie bereits weit verbreitet ist und keine zusätzliche Infrastruktur erfordert. Mit der Nutzung des Netzwerkes gehen auch einige Anforderungen einher. Die Zuverlässigkeit der Verbindung ist entscheidend. Kann keine ausreichende Qualität der Verbindung erfüllt werden kann nach SAE Spezifikation das Herunterstufen der Automatisierungsstufe erforderlich sein. In letzter Konsequenz kann diese schließlich zu einem Übernahmemanöver führen, wobei die Kontrolle wieder an den Fahrer zurückgegeben wird. Da ein solcher Wechsel Zeit in Anspruch nimmt, ist eine Vorhersage erforderlich, um die Netzqualität in den nächsten Sekunden zu prognostizieren. Eine solche Vorhersage der Dienstgüte (Quality of Service (QoS)), besonders hinsichtlich Durchsatz und Latenz, nach wie vor eine recht anspruchsvolle Aufgabe, da die drahtlosen Übertragungseigenschaften einer sich bewegenden mobilen Netzwerkverbindung großen Schwankungen unterliegen. In dieser Dissertation wird ein neuer Ansatz für die Vorhersage von Network Quality Parameters (NQPs) auf der Ebene des Transmission Control Protocol (TCP) vorgestellt. Er kombiniert das Wissen der Umgebung mit den Parametern des Mobilfunknetzes. Das Ziel dieser Arbeit ist eine umfangreiche Untersuchung verschiedener Modelle, darunter sind sowohl Lokalisationsglättende Kachel-Karten wie auch Regressionsverfahren aus dem Bereich des Maschinellen Lernens. Darüber hinaus wird dessen die Möglichkeit der Nutzung der Ortsunabhängigkeit eines Modells erörtert sowie Eignung für automatisiertes Fahren evaluiert. 
Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10772 ER - TY - THES A1 - Niklaus, Christina T1 - From Complex Sentences to a Formal Semantic Representation using Syntactic Text Simplification and Open Information Extraction N2 - Sentences that present a complex linguistic structure act as a major stumbling block for Natural Language Processing (NLP) applications whose predictive quality deteriorates with sentence length and complexity. The task of Text Simplification (TS) may remedy this situation. It aims to modify sentences in order to make them easier to process, using a set of rewriting operations, such as reordering, deletion or splitting. These transformations are executed with the objective of converting the input into a simplified output, while preserving its main idea and keeping it grammatically sound. State-of-the-art syntactic TS approaches suffer from two major drawbacks: first, they follow a very conservative approach in that they tend to retain the input rather than transforming it, and second, they ignore the cohesive nature of texts, where context spread across clauses or sentences is needed to infer the true meaning of a statement. To address these problems, we present a discourse-aware TS framework that is able to split and rephrase complex English sentences within the semantic context in which they occur. By generating a fine-grained output with a simple canonical structure that is easy to analyze by downstream applications, we tackle the first issue. For this purpose, we decompose a source sentence into smaller units by using a linguistically grounded transformation stage. The result is a set of self-contained propositions, with each of them presenting a minimal semantic unit. To address the second concern, we suggest not only to split the input into isolated sentences, but to also incorporate the semantic context in the form of hierarchical structures and semantic relationships between the split propositions. In that way, we generate a semantic hierarchy of minimal propositions that benefits downstream Open Information Extraction (IE) tasks. To function well, the TS approach that we propose requires syntactically well-formed input sentences. It targets general-purpose texts in English, such as newswire or Wikipedia articles, which commonly contain a high proportion of complex assertions. In a second step, we present a method that allows state-of-the-art Open IE systems to leverage the semantic hierarchy of simplified sentences created by our discourse-aware TS approach in constructing a lightweight semantic representation of complex assertions in the form of semantically typed predicate-argument structures. In that way, important contextual information of the extracted relations is preserved that allows for a proper interpretation of the output. Thus, we address the problem of extracting incomplete, uninformative or incoherent relational tuples that is commonly observed in existing Open IE approaches. Moreover, assuming that shorter sentences with a more regular structure are easier to process, the extraction of relational tuples is facilitated, leading to a higher coverage and accuracy of the extracted relations when operating on the simplified sentences. Aside from taking advantage of the semantic hierarchy of minimal propositions in existing Open IE approaches, we also develop an Open IE reference system, Graphene. It implements a relation extraction pattern upon the simplified sentences.
The framework we propose is evaluated within our reference TS implementation DisSim. In a comparative analysis, we demonstrate that our approach outperforms the state of the art in structural TS both in an automatic and a manual analysis. It obtains the highest score on three simplification datasets from two different domains with regard to SAMSA (0.67, 0.57, 0.54), a recently proposed metric targeted at automatically measuring the syntactic complexity of sentences which highly correlates with human judgments on structural simplicity and grammaticality. These findings are supported by the ratings from the human evaluation, which indicate that our baseline implementation DisSim returns fine-grained simplified sentences that achieve a high level of syntactic correctness and largely preserve the meaning of the input. Furthermore, a comparative analysis with the annotations contained in the RST Discourse Treebank (RST-DT) reveals that we are able to capture the contextual hierarchy between the split sentences with a precision of approximately 90% and reach an average precision of almost 70% for the classification of the rhetorical relations that hold between them. Finally, an extrinsic evaluation shows that when applying our TS framework as a pre-processing step, the performance of state-ofthe-art Open IE systems can be improved by up to 32% in precision and 30% in recall of the extracted relational tuples. Accordingly, we can conclude that our proposed discourse-aware TS approach succeeds in transforming sentences that present a complex linguistic structure into a sequence of simplified sentences that are to a large extent grammatically correct, represent atomic semantic units and preserve the meaning of the input. Moreover, the evaluation provides sufficient evidence that our framework is able to establish a semantic hierarchy between the split sentences, generating a fine-grained representation of complex assertions in the form of hierarchically ordered and semantically interconnected propositions. Finally, we demonstrate that state-of-the-art Open IE systems benefit from using our TS approach as a pre-processing step by increasing both the accuracy and coverage of the extracted relational tuples for the majority of the Open IE approaches under consideration. In addition, we outline that the semantic hierarchy of simplified sentences can be leveraged to enrich the output of existing Open IE systems with additional meta information, thus transforming the shallow semantic representation of state-of-the-art approaches into a canonical context-preserving representation of relational tuples. KW - Text Simplification KW - Syntactic Simplification KW - Open Information Extraction KW - Semantic Representation KW - Complex Sentences Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10540 ER - TY - THES A1 - Mandarawi, Waseem T1 - Multi-objective Network Virtualization and its Applicability to Industrial Networks N2 - Network virtualization provides high flexibility for deploying communication services in dense and heterogeneous environments. Two main approaches (dimensions) that are usually combined exist: Network Function Virtualization (NFV) technologies for functionality virtualization and Virtual Network Embedding (VNE) algorithms for resource virtualization. These approaches can be applied to different network levels, such as factory and enterprise levels of industrial networks. 
Several objectives and constraints, which might be conflicting, have to be considered when network virtualization is applied, mainly in complex topologies. This thesis proposes a network virtualization model that considers both virtualization dimensions, two network levels, and different objectives and constraints. The network levels considered are two primary levels in industrial networks. However, this consideration does not restrict the model to a particular environment or certain levels. The considered objectives/constraints are topology, reliability, security, performance, and resource usage. Based on this model, we first build an overall combined solution for autonomic and composite virtual networking. This solution considers both virtualization dimensions, two network levels, and target objectives. Furthermore, this solution combines three novel virtualization sub-approaches that consider performance, reliability, and security. However, the sub-approaches apply to different combinations of levels and dimensions, and the reliability approach additionally considers the resource usage objective. After presenting all solutions, we map them to the defined model. Regarding applicability to industrial networks, the combined approach is applied to an enterprise-level Industrial Internet of Things (IIoT) use case inspired by the smart factory concept in Industry 4.0. However, the sub-approaches are applied to more specific use cases. The performance and reliability solutions are integrated with relevant components of the Time Sensitive Networks (TSN) standard as a modern technology for industrial networks. The goal is to enrich the reliability and performance capabilities of TSN with the flexibility of network virtualization. In the combined approach, we compose and embed an environment-aware Extended Virtual Network (EVN) that represents the physical devices, virtual application functions, and required Service Function Chains (SFCs). We use the graph transformation method to transform abstract application requirements (represented by an Application Request (AR)) into an EVN. Both EVN composition and embedding methods consider the Substrate Network (SN) topology and different security, reliability, performance, and resource usage policies. These policies are applied with a certain priority and depend on the properties of communicating entities such as location and type. The EVN is embedded using property-based node mapping, reliability-aware branching, and a greedy chain embedding heuristic. The chain embedding heuristic is evaluated using a random topology that represents the use case. The performance sub-approach is NFV-based and is applied to a specific use case with Time-critical Traffic (TCT) flows. We develop and evaluate a complete framework for virtualizing Time-aware Shaper (TAS) using high-performance NFV. The reliability sub-approach is VNE-based and is applied to a specific factory-level use case. We develop minimal and maximal branching heuristics based on a reliability-aware k-shortest path algorithm and compare them using a typical factory topology. We then integrate these algorithms with a Frame Replication and Elimination for Reliability (FRER) simulator to realize reliability policies by the autonomic and efficient configuration of a supporting technology. The security sub-approaches are related to both virtualization dimensions and are applied to generic enterprise-level use cases.
However, the applicability of the security aspect to industrial networks is only shown in the combined (EVN) approach and its use case. We research the autonomic security management in Network Function Virtualization Infrastructure (NFVI) with the main goal of early reaction to threats through SFC reconfiguration through Virtual Network Function (VNF) live migration. This goal is approached by supporting the security measurements with a decision making architecture that considers, on the one hand, the threats and events in the environment and, on the other hand, the Service Level Agreement (SLA) between the NFVI provider and user. For this purpose, we classify the VNF-specific attacks and define possible early detectable behavior patterns. Finally, we develop a security-aware VNE heuristic that considers the security requirements of the Virtual Network (VN) and the security capabilities of the SN. This approach is modified in the combined approach to consider deploying virtualized security VNFs. KW - Network Virtualization KW - Industrial Networks KW - Virtual Network Embedding KW - Network Function Virtualization KW - Time Sensitive Networking Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10606 ER - TY - THES A1 - Alyousef, Ammar T1 - E-Mobility Management: Towards a Grid-friendly Smart Charging Solution N2 - Replacing fossil-fueled vehicles with Electric Vehicles (EVs) poses new challenges for power distribution networks. Specifically speaking, the electrification of the mobility sector relies on the ability to process and analyze information on when, where, for how long, or how fast charging processes will take place. Nevertheless, such kind of information is typically difficult to acquire or insufficiently predictable due to the dynamic nature of the system. Also, the increasing adoption rate of the renewable energy sources, specifically the domestic Photovoltaic (PV) systems, and the potentially associated grid defection scenarios will significantly impact the cost and efforts required to operate the grid in terms of power quality and demand-supply aspects. However, such emerging requirements have arguably not been taken into account when the distribution grid was built originally. Besides, expanding the distribution and transmission capacity is a very costly and lengthy process. Therefore, any proposed solution should be cost-effective as well as environment-, grid- and user-friendly. To this end, the advancements in Information and Communications Technology (ICT) are increasingly adopted and applied. This thesis addresses the rapidly growing EV sector and deals with the problems to overcome potential power quality degradation caused by the challenges mentioned above. Since time switch and radio ripple control as existing solutions in Germany are costly and neither very effective nor scalable as it requires hardware retrofitting of existing public Charging Stations (CSs), the primary focus of this work is the development of an appropriate, standards-based, scalable, and smart charging solution of EVs. Such a solution can, in turn, boost the usage of renewable energy by ensuring that the existing grid infrastructure can operate within its permissible limits while maintaining acceptable levels of power quality. This work introduces a new definition of the concept, “grid-friendly EV charging”, where the power demand of a CS is adjusted depending on the real-time status of a power grid. 
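As a purely illustrative sketch of the "grid-friendly charging" idea defined above (this is not the controller developed in the thesis; the coarse green/amber/red grid states, the ramp factors, and the power limits are assumptions), a charging station could adjust its power setpoint from a periodically received grid status signal, as in the following Python snippet.

def adjust_charging_power(current_kw: float, grid_state: str,
                          max_kw: float = 22.0, min_kw: float = 1.4) -> float:
    """Return the next charging power setpoint of a charging station (CS)
    based on a coarse real-time grid status signal. Illustrative only:
    the states and adjustment factors are assumptions."""
    if grid_state == "green":   # grid healthy: ramp up gently towards the maximum
        return min(max_kw, current_kw + 0.1 * max_kw)
    if grid_state == "amber":   # grid stressed: back off smoothly to avoid drastic changes
        return max(min_kw, 0.7 * current_kw)
    if grid_state == "red":     # critical situation: fall back to the minimum sustaining power
        return min_kw
    return current_kw           # unknown state: keep the current setpoint

# Example: a CS charging at 11 kW receives an "amber" signal and reduces to 7.7 kW.
print(adjust_charging_power(11.0, "amber"))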
In this regard, the conflicting concerns of stakeholders in an EV ecosystem are considered. For example, a Distribution System Operator (DSO) does not want to reveal a lot of technical details about the power grid or its status. Similarly, a Charging Service Provider (CSP) wants to keep its clients happy without sharing the details of its business model with others, namely, DSOs. For that sake, a distributed smart charging architecture is proposed in this thesis. It is event-driven and responds in nearly real-time to unforeseen and critical grid situations such as high/low voltage, congestion, phase unbalance, and harmonics. In that regard, the publish/subscribe messaging pattern, used as a part of the architecture, enables an efficient and well-performing communication scheme among the different components. Moreover, an indication mechanism about the different issues in a power grid is developed; it adopts the traffic light model. It works as a black box to separate smart controllers for each CS and configured only by the CSP. Smart chargers enable a smooth adjustment of the charging power to avoid drastic changes in the grid state. To that end, two types of intelligent controllers are developed and tested. While the first controller is inspired by the fuzzy logic, the second one is inspired by the slow-start mechanism used in TCP to control congestion in computer networks. A simulative approach is applied to evaluate the solution, thereby, a topology of a real low voltage grid with realistic load and generation profiles is used. Furthermore, a set of metrics is defined regarding the main concerns of stakeholders: voltage, overloading, fairness, the satisfaction of EV users and grid operator, as well as the grid-friendly behavior of a CS/ EV user. The evaluation shows that the solution is able to guarantee a safe operation of the grid. The proposed system can ensure a grid-friendly charging by sacrificing of a small portion of user satisfaction, that sacrifice of a user is awarded via a points-based reward system. Last but not least, the proposed distributed controllers are compared to two other controllers: (1) a decentralized controller based only on sensing the local voltage and (2) a very strict centralized controller focusing on grid-friendliness. The latter ensures proportional fairness among users regarding the objective function of the optimization problem solved in each simulation step. The distributed controllers are superior to the decentralized controller in terms of grid friendly and fairness and converge in general to the centralized one. KW - E-Mobility KW - Smart Charging KW - Grid-Friendliness KW - Elektromobilität KW - Lademanagement KW - Netzstabilität Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9302 ER - TY - THES A1 - Niedermeier, Florian T1 - Power-Adaptive Computing in Future Energy Networks N2 - The current electricity grid is undergoing major changes. There is increasing pressure to move away from power generation from fossil fuels, both due to ecological concerns and fear of dependencies on scarce natural resources. Increasing the share of decentralized generation from renewable sources is a widely accepted way to a more sustainable power infrastructure. However, this comes at the price of new challenges: generation from solar or wind power is not controllable and only forecastable with limited accuracy. To compensate for the increasing volatility in power generation, exerting control on the demand side is a promising approach. 
By providing flexibility on demand side, imbalances between power generation and demand may be mitigated. This work is concerned with developing methods to provide grid support on demand side while limiting the associated costs. This is done in four major steps: first, the target power curve to follow is derived taking both goals of a grid authority and costs of the respective load into account. In the following, the special case of data centers as an instance of significant loads inside a power grid are focused on more closely. Data center services are adapted in a way such as to achieve the previously derived power curve. By means of hardware power demand models, the required adaptation of hardware utilization can be derived. The possibilities of adapting software services are investigated for the special use case of live video encoding. A method to minimize quality of experience loss while reducing power demand is presented. Finally, the possibility of applying probabilistic model checking to a continuous demand-response scenario is demonstrated. KW - Power-adaptive software KW - Energy systems KW - Energieversorgung KW - Software Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9993 ER - TY - THES A1 - Ansah, Frimpong T1 - Performance and optimization technologies for software defined industrial networks N2 - The concept of programmable networks is radically changing the way communication infrastructures are designed, integrated, and operated. Currently, the topic is spearheaded by concepts such as software-defined networking, forwarding and control element separation, and network function virtualization. Notably, software-defined networking has attracted significant attention in telecommunication and data centers and thus already in some production-grade networks. Despite the prevalence of software-defined networking in these domains, industrial networks are yet to see its benefits to encourage adoption. However, the misconceptions around the concept itself, the role of virtualization, and algorithms pose a significant obstacle. Furthermore, the desire to accommodate new services in the automation industry results in a pattern of constantly increasing complexity of industrial networks, which is compounded by the requirement to provide stringent deterministic service guarantees considering characteristically different applications and thus posing a significant challenge for management, configuration, and maintenance as existing solutions are architecturally inflexible. Therefore, the first contribution of this thesis addresses the misconceptions around software-defined networking by providing a comparative analysis of programmable network concepts, detailing where software-defined networks compare with other concepts and how its principles can be leveraged to evolve industrial networks. Armed with the fundamental principles of programmable networks, the second contribution identifies virtualization technologies and proposes novel algorithms to provide varied quality of service guarantees on converged time-sensitive Ethernet networks using software-defined networking concepts. Finally, a performance analysis of a software-defined hybrid deployment solution for control and management of time-sensitive Ethernet networks that integrates proposed novel algorithms is presented as an industrial use-case that enables industrial operators to harness the full potential of time-sensitive networks. 
KW - Performance KW - Software Defined Industrial Networks KW - Virtual Network Embedding KW - Schedulability Analysis KW - Worst-case Delay Analysis KW - Software Defined Industrial Networks KW - Deterministic Petri-net and Queuing networks KW - Virtual Network Embedding and Worst-case Delay Analysis KW - Schedulability Analysis KW - Performance Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9002 PB - Universität Passau CY - Passau ER - TY - THES A1 - Lang, Thomas T1 - AI-Supported Interactive Segmentation of 3D Volumes N2 - The segmentation of volumetric datasets, i.e., the partitioning of the data into disjoint sub-volumes with the goal to extract information about these regions,is a difficult problem and has been discussed in medical imaging for decades. Due to the ever-increasing imaging capabilities, in particular in X-ray computed tomography (CT) or magnetic resonance imaging, segmentation in industrial applications also gains interest. Especially in industrial applications the generated datasets increase in size. Hence, most applications apply well-known techniques in a 2+1-dimensional manner,i.e., they apply image segmentation procedures on each slice separately and track the progress along the axis of the volume in which the slices are stacked on. This discards the information on preceding or subsequent slices, which is often assumed to be nearly identical. However, in the industrial context this might prove wrong since industrial parts might change their appearance significantly over the course of even a few slices. Moreover, artifacts can further distort the content of the slices. Therefore, three-dimensional processing of voxel volumes has to be preferred, which induces constraints upon the segmentation procedures. For example, they must not consider global information as it is usually not feasible in big scans to compute them efficiently. Yet another frequent problem is that applications focus on individual parts only and algorithms are tailored to that case. Most prominent medical segmentation procedures do so by applying methods to specifically find the liver and only the liver of a patient, for example. The implication is that the same method then cannot be applied to find other parts of the scan and such methods have to be designed individually for any object to be segmented. Flexible segmentation methods are needed too specifically when partitioning unique scans. We define a unique scan to be a voxel dataset for which no comparable volume exists. Classical examples include the use case of cultural heritage where not only the objects themselves are unique but also scan parameters are optimized to obtain the best image quality possible for that specific scan. This thesis aims at introducing novel methods for voxelwise classifications based on local geometric features. The latter are computed from local environments around each voxel and extract information in similar ways as humans do, namely by observing their similarity to geometric or textural primitives. These features serve as the foundation to learning the proposed voxelwise classifiers and to discriminate between segmented and unsegmented voxels. On the one hand, they perform fully automated clustering of volumes for which a representative random sample is extracted first. On the other hand, a set of segmenting classifiers can be trained from few seed voxels, i.e., volume elements for which a domain expert marked if they belong to the components that shall be segmented. 
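A minimal sketch of the seed-based training idea described in this record (not the geometric feature set or classifier used in the thesis; the neighborhood statistics, the random-forest model, and all names are placeholders, and boundary handling is omitted) could look as follows in Python.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def local_features(volume: np.ndarray, z: int, y: int, x: int, r: int = 2) -> np.ndarray:
    """Simple descriptors of the local r-neighborhood around a voxel
    (mean, standard deviation, center intensity); placeholders for the
    local geometric features discussed in the abstract."""
    patch = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
    return np.array([patch.mean(), patch.std(), volume[z, y, x]])

def train_from_seeds(volume: np.ndarray, seeds: list) -> RandomForestClassifier:
    """Train a voxelwise classifier from a few expert-labeled seed voxels.
    'seeds' is a list of ((z, y, x), label) pairs."""
    X = np.stack([local_features(volume, *pos) for pos, _ in seeds])
    y = np.array([label for _, label in seeds])
    return RandomForestClassifier(n_estimators=50).fit(X, y)

# Example: label a handful of voxels by hand, then classify any other voxel
# in a single linear pass over the volume.
vol = np.random.rand(64, 64, 64)
seeds = [((10, 10, 10), 1), ((40, 40, 40), 0), ((12, 30, 20), 1), ((50, 5, 60), 0)]
clf = train_from_seeds(vol, seeds)
label = clf.predict(local_features(vol, 32, 32, 32).reshape(1, -1))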
The interactive selection offers the advantage that no completely labeled voxel volumes are necessary, and hence that unique scans, for which no comparable volumes exist, can be segmented. Overall, it will be shown that all proposed segmentation methods effectively have linear runtime with respect to the number of voxels in the volume. Thus, voxel volumes without size restrictions can be segmented in an efficient linear pass through the volume. Finally, the segmentation performance is evaluated on selected datasets, which shows that the introduced methods can achieve good results on scans from a broad variety of domains, for both small and big voxel volumes. N2 - Die Segmentierung von Volumendaten, also die Partitionierung der Daten in disjunkte Teilvolumen zur weiteren Informationsextraktion, ist ein Problem, welches in der medizinischen Bildverarbeitung seit Jahrzehnten behandelt wird. Bedingt durch die sich ständig verbessernden Bilderfassungsmethoden, speziell im Bereich der Röntgen-Computertomographie (CT) oder der Magnetresonanztomographie, gewinnt die Segmentierung von industriellen Volumendaten auch an Wichtigkeit. Insbesondere im industriellen Kontext steigt die Größe der zu segmentierenden Daten jedoch rasant an, so dass sich die meisten Segmentierungsapplikationen auf den 2+1-dimensionalen Fall beschränken, also Bilder verarbeiten und die Ergebnisse über mehrere Bilder hinweg verfolgen. Jedoch werden somit beispielsweise geometrische Informationen über benachbarte Schichten ignoriert. Diese können sich aber gerade im industriellen Bereich signifikant ändern. Aus diesem Grund ist hier die dreidimensionale Bildverarbeitung vorzuziehen. Dadurch ergeben sich neue Einschränkungen, beispielsweise können keine globalen Informationen zur Segmentierung herangezogen werden, da diese typischerweise nicht effizient berechenbar sind. Ferner fokussieren sich dreidimensionale Methoden aus medizinischen Bereichen zumeist auf bestimmte Bestandteile der Daten, wie einzelne Organe. Dies schränkt die Generalität dieser Methoden signifikant ein und somit sind separate Verfahren für jedes zu segmentierende Objekt notwendig. Flexible Methoden sind darüber hinaus bei Anwendung auf einzigartige Scans erforderlich. Ein einzigartiger Scan ist ein Voxelvolumen, für welches kein vergleichbares Datum existiert. Klassische Beispiele sind Kulturgutdigitalisate, da dort nicht nur die Objekte einzigartig sind, sondern auch die Aufnahmeparameter spezifisch für diesen einen Scan optimiert wurden. Die vorliegende Dissertation führt neuartige Methoden zur voxelweisen dreidimensionalen Segmentierung von Volumendaten auf Basis lokaler geometrischer Informationen ein. Die Bewertung dieser Informationen imitiert die menschliche Objektwahrnehmung, indem lokale Regionen mit geometrischen oder strukturellen Primitiven verglichen werden. Mit Hilfe dieser Bewertungen werden voxelweise anzuwendende Klassifikatoren trainiert, welche zwischen erwünschten und unerwünschten Voxeln unterscheiden sollen. Ein Teil dieser Klassifikatoren führt eine vollautomatische Clustering-Analyse durch, nachdem eine repräsentative und zufällig ausgewählte Teilmenge fester Größe an Voxeln selektiert wurde. Die verbliebenen Segmentierungsalgorithmen erhalten Trainingsdaten in Form von Seed-Voxeln, also wenige Volumenelemente, die von einem Domänenexperten markiert wurden.
Diese interaktive Herangehensweise ermöglicht das Einbringen von Expertenwissen ohne die Notwendigkeit vollständig annotierter Trainingsvolumen, wodurch auch einzigartige Scans segmentiert werden können. Für alle Verfahren wird dargelegt, dass die eingeführten Algorithmen von asymptotisch linearer Laufzeit in der Anzahl der Voxel im Volumen sind. Somit können Voxeldaten ohne Größenbeschränkungen in einem effizienten linearen Durchgang verarbeitet werden. Abschließend wird die Performanz der vorgestellten Verfahren auf ausgewählten Daten evaluiert und aufgezeigt, dass mit denselben wenigen Verfahren gute Ergebnisse auf vielen unterschiedlichen Domänen und gleichfalls auf kleinen und großen Volumen erzielt werden können. KW - Segmentation KW - Computed Tomography KW - Artificial Intelligence KW - Active Learning KW - Interactive KW - Machine learning KW - Image processing Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9221 ER - TY - THES A1 - Salehi Rizi, Fatemeh T1 - Graph Representation Learning for Social Networks N2 - Online social networks provide a rich source of information about millions of users worldwide. However, due to their sparsity and complex structure, analyzing these networks is quite challenging and expensive. Recently, graph embedding emerged to map networked data into low-dimensional representations, i.e. vector embeddings. These representations are fed into off-the-shelf machine learning algorithms to simplify and speed up graph analytic tasks. Given the immense importance of social network analysis, in this thesis we aim to study graph embedding for social networks in three directions. Firstly, we focus on social networks at the microscopic level, primarily to encode the structural characteristics of users' personal networks, the so-called ego networks. These representations are utilized in evaluation tasks whose performance depends on relational information from direct neighbors. For example, social circle prediction and event attendance inference both need structural information from neighbors in social networks. Secondly, we explore how to assess the content of vector embeddings in terms of topological properties. This is addressed via two proposed approaches: 1) a learning-to-rank algorithm in which the model weights reveal the importance of properties at the subgraph level (ego networks), and 2) a regression model for the direct approximation of network statistical properties at the vertex level. Thirdly, we propose extensions of graph embedding to capture the sign of links or additional content in social networks. Users in social media often express their feelings and attitudes towards others, which forms sentiment links besides social links. We design a joint objective function whose terms capture the semantics of both social and sentiment links simultaneously. We also propose a multi-task learning framework for networks with attributes and labels, built by stacking autoencoders. The weights of the learning tasks are automatically assigned via an adaptive loss weighting layer. Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9211 ER - TY - JOUR A1 - Basmadjian, Robert T1 - Flexibility-Based Energy and Demand Management in Data Centers BT - a Case Study for Cloud Computing JF - Energies N2 - The power demand (kW) and energy consumption (kWh) of data centers have increased drastically due to the growing communication and computation needs of IT services. Leveraging demand and energy management within data centers is a necessity.
Thanks to the automated ICT infrastructure empowered by IoT technology, such types of management are becoming more feasible than ever. In this paper, we look at management from two different perspectives: (1) minimization of the overall energy consumption and (2) reduction of peak power demand during demand-response periods. Both perspectives have a positive impact on the total cost of ownership for data centers. We exhaustively reviewed the potential mechanisms in data centers that provide flexibility, together with flexible contracts such as green service level and supply-demand agreements. We extended the state of the art by introducing the methodological building blocks and foundations of management systems for the two above-mentioned perspectives. We validated our results by conducting experiments on a cloud computing data center at lab-grade scale, at the premises of HPE in Milano. The obtained results support the theoretical model by highlighting the excellent potential of flexible service level agreements in Green IT: 33% overall energy savings and a 50% reduction of power demand during demand-response periods in the case of data center federation. Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9251 VL - 2019 IS - 12 SP - 1 EP - 22 PB - MDPI CY - Basel ER -
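The article above quantifies achievable savings but, in this abstract, does not give the underlying power model. As a minimal, hypothetical sketch of how a demand-response power cap can be translated into a utilization target, the following Python snippet assumes a simple linear server power model P(u) = P_idle + (P_peak - P_idle) * u; this is a common textbook simplification, not necessarily the model used in the paper, and all names and numbers are made up for illustration.

# Hypothetical sketch: linear server power model and the utilization cap
# needed to honor a demand-response power limit. Not taken from the paper.

def server_power(utilization, p_idle_w, p_peak_w):
    # Electrical power draw (W) of one server at CPU utilization in [0, 1].
    return p_idle_w + (p_peak_w - p_idle_w) * utilization

def max_utilization_for_cap(power_cap_w, p_idle_w, p_peak_w):
    # Highest utilization that keeps one server at or below the power cap.
    u = (power_cap_w - p_idle_w) / (p_peak_w - p_idle_w)
    return max(0.0, min(1.0, u))

# Example with assumed numbers: 200 servers, 100 W idle, 300 W peak, 70 % load.
n, p_idle, p_peak, u_now = 200, 100.0, 300.0, 0.7
baseline_kw = n * server_power(u_now, p_idle, p_peak) / 1000  # 48.0 kW baseline
u_capped = max_utilization_for_cap(200.0, p_idle, p_peak)     # 0.5 -> shed 20 % points
print(baseline_kw, u_capped)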