TY - THES A1 - Koop, Martin T1 - Preventing the Leakage of Privacy Sensitive User Data on the Web N2 - Recording users' Internet activity and linking it with personal data has become a key resource for many paid and free services on the web. These services include web applications, such as the maps/navigation or web search provided by Google, that are used free of charge every day, as well as all websites that provide news or general information on various topics, usually at no cost. By visiting and using these web services, all information processed within the service is passed on to the service provider. This comprises not only the profile data stored in the user account of the web service, such as name or address, but also the activity within the service, such as clicking links or the time spent on a page. Beyond that, there are countless third parties, mostly embedded into web services in the background, that record and analyze user behavior across the entire web activity, spanning multiple websites. Various techniques, usually hidden from the user, are employed to closely follow users' online behavior and to collect large amounts of sensitive data. This practice is known as web tracking and is mainly used by advertising companies. The collected data is often personal and a valuable resource for these companies, for example to serve personalized advertising matched to a user's profile. The use of such personal data, however, also has further-reaching consequences, reflected among other things in price adjustments for users with particular profile attributes, such as the use of expensive devices. The goal of this thesis is to improve users' privacy on the Internet and to significantly reduce user tracking on the web. Four challenges arise, each of which forms a research focus of this thesis: (1) a systematic analysis and classification of deployed tracking techniques, (2) an investigation of existing protection mechanisms and their weaknesses, (3) the design of a reference architecture for protection against web tracking, and (4) the design of an automated test environment under real-world conditions to evaluate how far the developed protection measures reduce web tracking. Each of these research foci provides new contributions towards the overarching goal: the development of protection measures against the disclosure of sensitive user data on the web. The first scientific contribution of this dissertation is a comprehensive evaluation of deployed web tracking techniques and methods, as well as their dangers, risks, and implications for the privacy of Internet users. The evaluation additionally includes an examination of existing tracking protection mechanisms and their weaknesses. The insights gained are central to the new approaches developed in this thesis and improve the previously insufficient protection against web tracking. The second scientific contribution is the development of a robust classification of web tracking, the design of an efficient architecture for long-term studies of web tracking, and an interactive visualization of the occurrence of web tracking on the Internet.
The new classification approach for identifying tracking is based on measuring the entropy of the information content of cookies. The results of the long-term web tracking studies include 1,209 identified tracking domains on the most visited websites in Germany. Within the top 25 websites, an average of 45 tracking elements per website was found. The tracker with the highest potential to create a user profile was doubleclick.com, as it monitors 90% of these websites. The evaluation of the examined tracking network further provided detailed insight into a tracking technique based on redirect links. For this, we analyzed 1.2 million HTTP traces from months-long crawls of the 50,000 internationally most visited websites. The results show that 11.6% of these websites use HTTP redirects, hidden in website links, for tracking. This is used to route the user's browsing history after a click through a chain of (tracking) servers, which are usually not visible, before the intended link target is loaded. In this scenario, the tracker captures valuable connection metadata about the content, topic, or user interests of the website. We provide the visualization of the tracking ecosystem as an interactive open-source web tool. The third scientific contribution of this dissertation is the design of two novel protection mechanisms against web tracking and the construction of an automated simulation environment under real-world conditions to verify the effectiveness of the implementations. The focus lies on the two most widely used tracking methods: cookies (where a unique ID is stored on the user's device) and browser fingerprinting. The latter describes a method of collecting a multitude of device properties in order to uniquely (re-)identify the user without storing a unique ID on the device. To study the effectiveness of the protection mechanisms developed in this thesis, we implemented and evaluated the protection concepts directly in the Chromium browser. The result shows a successful reduction of web tracking by 44%. In addition, the "Site Isolation" concept developed in this thesis improves the privacy of the private browsing mode, allows a manual storage time limit to be set for cookies, and protects the browser against various threats such as CSRF (Cross-Site Request Forgery) or attacks based on CORS (Cross-Origin Resource Sharing). Site Isolation stores the local website state in separate containers and can thereby prevent various tracking methods such as cookies, localStorage, or redirect tracking. In an evaluation of 1.6 million web pages, we showed that the tracker doubleclick.com has the highest potential to track users and is present on 25% of the 40,000 internationally most visited websites. Finally, we demonstrate robust browser fingerprinting protection in our extended Chromium browser. Testing our prototype across 70,000 browser sessions shows that our browser protects the user against so-called browser fingerprinting tracking. Compared to five other browser fingerprinting protection tools, our prototype achieved the best results and is the first protection mechanism against both Flash and canvas fingerprinting.
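The entropy-based classification mentioned above can be made concrete with a small sketch. The following Python fragment is a hypothetical illustration, not the implementation evaluated in the thesis: it estimates the Shannon entropy of the values a given cookie takes across many clients and flags high-entropy cookies as likely unique identifiers; the threshold is an assumption chosen for the example.

    # Hypothetical sketch of entropy-based cookie classification: a cookie
    # whose values are (nearly) unique per client carries enough information
    # to act as a user identifier.
    import math
    from collections import Counter

    def shannon_entropy(values):
        """Shannon entropy (in bits) of the observed cookie values."""
        counts = Counter(values)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_like_tracking_cookie(values, threshold_bits=10.0):
        # The threshold is illustrative; the thesis derives its own criterion.
        return shannon_entropy(values) >= threshold_bits

    # Example: a cookie with a distinct value for each of 5,000 clients.
    observed = [f"uid-{i:06d}" for i in range(5000)]
    print(shannon_entropy(observed))             # ~12.3 bits
    print(looks_like_tracking_cookie(observed))  # True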
KW - Web Tracking, Cookies, Browser Fingerprinting, Redirects, Site Isolation KW - Data Privacy KW - Computer Security KW - Object Tracking Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8717 ER - TY - THES A1 - Kasinathan, Prabhakaran T1 - Workflow-aware access control for the Internet of Things N2 - IoT is defined as a paradigm where "things" have sensing, actuating, communicating, and self-configuring abilities and are connected to each other and to the Internet. Recent advancements in the manufacturing industry have made it possible to mass-produce embedded devices with various sensors and actuators at a reduced cost. As part of the IoT revolution, everyday devices such as televisions, refrigerators, cars, and even industrial machines are now connected IoT devices. Recent studies have predicted that by 2025 there will be over 75 billion such IoT devices connected to the Internet. The providers of IoT-based services want to integrate their services to satisfy customer requirements. For example, in the mobility scenario, different mobility solution providers want to jointly offer a multi-modal ticket to their customers. In such a distributed and loosely coupled environment, each owner and stakeholder wants to secure his/her own integrity, confidentiality, and functionality goals. This means that distributed rules and conditions defined by the individual owners must be enforced on the participating entities (e.g., customers or partners using their services). The owners and stakeholders may not necessarily trust each other's actions. Therefore, a mechanism is required that guarantees the rules and conditions specified by the different owners. Attacks on IoT devices and similar computing systems are increasing and getting more advanced. IoT devices are often constrained, i.e., they have limited processing power, memory, and energy. Security mechanisms designed for traditional computing systems, e.g., computers, servers, or mobile computing devices such as smartphones, may not fit into those constrained IoT devices. Weak security mechanisms and unenforced security measures were among the main reasons for recent successful attacks on IoT devices and services. As IoT is now used in many sensitive places, including critical infrastructures, securing these devices and services becomes more critical than ever. This thesis focuses on developing mechanisms that secure IoT devices and services and that enforce the rules and conditions specified by the owners on entities wanting to access the owners' resources. In classical computer systems, security automata are used for specifying security policies, and monitoring mechanisms are used for enforcing such policies. For instance, a reference monitor observes the execution and stops it when the security policies are about to be violated, thereby enforcing them. To prevent an adversary from using protected IoT devices or services for malicious purposes, it must be ensured that a workflow is followed to access the protected resource. In distributed IoT systems where the policies are governed by different owners, each owner would like to specify his/her rules and conditions as workflows. The workflows contain tasks that must be performed in a particular order. The goal of this thesis is to develop mechanisms to specify and enforce these workflows in the distributed IoT environment.
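As a rough illustration of workflows whose tasks must be performed in a particular order, the following Python sketch models a tiny Petri-net-style workflow in which a transition (task) may only fire when all of its input places hold tokens, so out-of-order execution is rejected. All task and place names are illustrative, and the sketch omits the distribution, authorization, and accountability aspects the thesis actually addresses.

    # Illustrative Petri-net-style workflow: a task (transition) fires only
    # if every input place holds a token, enforcing the task order.
    class Workflow:
        def __init__(self, initial_marking):
            self.marking = dict(initial_marking)  # place -> token count
            self.transitions = {}                 # name -> (inputs, outputs)

        def add_task(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)

        def fire(self, name):
            inputs, outputs = self.transitions[name]
            if any(self.marking.get(p, 0) < 1 for p in inputs):
                raise PermissionError(f"task '{name}' not enabled - access denied")
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1

    wf = Workflow({"start": 1})
    wf.add_task("authenticate", ["start"], ["authenticated"])
    wf.add_task("unlock_device", ["authenticated"], ["done"])

    wf.fire("authenticate")
    wf.fire("unlock_device")  # allowed: tasks executed in the required order
    # Calling wf.fire("unlock_device") again would raise PermissionError.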
This thesis introduces a distributed Workflow-aware Access Control (WFAC) framework that restricts entities to doing only what they are allowed to do in a collaborative environment. To gain access to a service protected by the WFAC framework, every workflow participant must prove that he/she is in a particular state of an authorized workflow. Authorized means two things: (a) the owner has authorized the workflow to be executed; (b) the workflow participant is authorized to execute it. This restricts the adversary's access to the devices and their services. The security policies defined by different owners are modeled as workflows and specified using Petri Nets. The policies are then enforced with the help of the WFAC framework, which supports error handling, accountability, integration of practitioner-friendly tools, and interoperability with existing security mechanisms such as OAuth. Thus, the WFAC framework guarantees the integrity of workflows in a distributed environment. KW - Workflow-Aware Access Control for the Internet of Things KW - Petri Nets, Blockchain, Security Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8915 ER - TY - THES A1 - Alshawish, Ali T1 - Risk-based Security Management in Critical Infrastructure Organizations N2 - Critical infrastructure and contemporary business organizations are experiencing an ongoing paradigm shift towards more collaboration and agility. On the one hand, this shift seeks to enhance business efficiency, coordinate large-scale distribution operations, and manage complex supply chains. On the other hand, it makes traditional security practices such as firewalls and other perimeter defenses insufficient. Therefore, concerns over risks like terrorism, crime, and business revenue loss increasingly impose the need for enhancing and managing security within the boundaries of these systems so that unwanted incidents (e.g., potential intrusions) can still be detected with high probability. To this end, critical infrastructure organizations step up their efforts to investigate new possibilities for actively engaging in situational awareness practices to ensure a high level of persistent monitoring as well as on-site observation. Compliance with security standards is necessary to ensure that organizations meet regulatory requirements, which are mostly shaped by a set of best practices. Nevertheless, it does not necessarily result in a coherent security strategy that considers the different aims and practical constraints of each organization. In this regard, there is a growing demand for risk-based security management approaches that enable critical infrastructures to focus their efforts on mitigating the risks to which they are exposed. Broadly speaking, security management involves the identification, assessment, and evaluation of long-term (or overall) objectives and interests as well as the means of achieving them. Due to the critical role of such systems, their decision-makers tend to enhance the systems' resilience against very unpleasant outcomes and severe consequences. That is, they seek to avoid decision options associated with likely extreme risks in the first place. Practically speaking, this risk attitude can significantly influence the decision-making process in such critical organizations. Towards incorporating the aversion to extreme risks into security management decisions, this thesis thoroughly investigates the capabilities of a recently emerged theory of games with payoffs that are probability distributions.
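To give a flavor of decisions over distribution-valued payoffs, the following toy Python sketch compares two hypothetical defense actions by the probability mass their empirical loss distributions place beyond an "extreme loss" boundary and prefers the lighter tail. This is a deliberately simplified stand-in, not the decision rule developed in the referenced theory or in the thesis; the distributions and the boundary are assumptions for the example.

    # Toy comparison of two actions whose outcomes are loss distributions:
    # prefer the action whose distribution puts less mass on extreme losses.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical loss samples (e.g., from a risk simulation) per action.
    loss_a = rng.gamma(shape=2.0, scale=1.0, size=100_000)  # lighter tail
    loss_b = rng.gamma(shape=2.0, scale=1.5, size=100_000)  # heavier tail

    def tail_mass(samples, threshold):
        """Empirical probability of a loss exceeding the threshold."""
        return float(np.mean(samples > threshold))

    extreme = 8.0  # illustrative boundary of the 'extreme loss' region
    print(tail_mass(loss_a, extreme))  # roughly 0.003
    print(tail_mass(loss_b, extreme))  # roughly 0.03

    # A risk-averse decision maker picks the action with the smaller tail mass.
    best = min([("a", loss_a), ("b", loss_b)], key=lambda kv: tail_mass(kv[1], extreme))
    print("preferred action:", best[0])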
Unlike traditional optimization techniques, this theory provides an alternative decision technique that is more robust to extreme risks and uncertainty. Furthermore, this thesis proposes a new method that gives a decision maker more control over the decision-making process by defining loss regions with different importance levels according to the decision maker's risk attitude. In this way, the static decision analysis used in the distribution-valued games is transformed into a dynamic process that can adapt to different subjective risk attitudes or account for future changes in the decision caused by a learning process or other changes in the context. Throughout its different parts, this thesis shows how theoretical models, simulation, and risk assessment models can be combined into practical solutions. In this context, it deals with three facets of security management: allocating limited security resources, prioritizing security actions, and tweaking decision making. Finally, the author discusses experiences and limitations distilled from this research and from investigating the new theory of games, which can be taken into account in future approaches. KW - Security Management KW - Game Theory KW - Critical Infrastructures KW - Risk Attitude KW - Uncertainty KW - Risk Management Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10026 ER - TY - THES A1 - Alyousef, Ammar T1 - E-Mobility Management: Towards a Grid-friendly Smart Charging Solution N2 - Replacing fossil-fueled vehicles with Electric Vehicles (EVs) poses new challenges for power distribution networks. Specifically, the electrification of the mobility sector relies on the ability to process and analyze information on when, where, for how long, or how fast charging processes will take place. Nevertheless, such information is typically difficult to acquire or insufficiently predictable due to the dynamic nature of the system. Also, the increasing adoption of renewable energy sources, specifically domestic photovoltaic (PV) systems, and the potentially associated grid defection scenarios will significantly impact the cost and effort required to operate the grid in terms of power quality and demand-supply aspects. However, such emerging requirements were arguably not taken into account when the distribution grid was originally built. Besides, expanding the distribution and transmission capacity is a very costly and lengthy process. Therefore, any proposed solution should be cost-effective as well as environment-, grid- and user-friendly. To this end, the advancements in Information and Communications Technology (ICT) are increasingly adopted and applied. This thesis addresses the rapidly growing EV sector and deals with the problem of overcoming potential power quality degradation caused by the challenges mentioned above. Existing solutions in Germany, such as time switches and radio ripple control, are costly and neither very effective nor scalable, as they require hardware retrofitting of existing public Charging Stations (CSs). The primary focus of this work is therefore the development of an appropriate, standards-based, scalable, and smart charging solution for EVs. Such a solution can, in turn, boost the usage of renewable energy by ensuring that the existing grid infrastructure can operate within its permissible limits while maintaining acceptable levels of power quality.
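To illustrate roughly what such a grid-friendly adjustment of charging power can look like, the sketch below reacts to a coarse traffic-light-style grid signal, backing off sharply in critical states and ramping up cautiously otherwise, loosely in the spirit of the TCP-slow-start-inspired controller described in the following paragraph. The signal names, constants, and power limits are assumptions for the example, not the thesis' parameters.

    # Illustrative charging controller: adjust a charging station's power
    # setpoint from a coarse grid-status signal (traffic-light model).
    def next_setpoint(current_kw, grid_signal, max_kw=22.0, min_kw=1.4):
        if grid_signal == "red":     # critical grid state: back off sharply
            return max(min_kw, current_kw / 2.0)
        if grid_signal == "yellow":  # tense state: hold the current level
            return current_kw
        # green: ramp up cautiously instead of jumping to the maximum
        return min(max_kw, current_kw * 1.5)

    power = 2.0
    for signal in ["green", "green", "yellow", "red", "green"]:
        power = next_setpoint(power, signal)
        print(f"{signal:>6}: {power:.1f} kW")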
This work introduces a new definition of the concept of "grid-friendly EV charging", where the power demand of a CS is adjusted depending on the real-time status of the power grid. In this regard, the conflicting concerns of stakeholders in an EV ecosystem are considered. For example, a Distribution System Operator (DSO) does not want to reveal many technical details about the power grid or its status. Similarly, a Charging Service Provider (CSP) wants to keep its clients happy without sharing the details of its business model with others, namely DSOs. To this end, a distributed smart charging architecture is proposed in this thesis. It is event-driven and responds in nearly real time to unforeseen and critical grid situations such as high/low voltage, congestion, phase unbalance, and harmonics. The publish/subscribe messaging pattern, used as part of the architecture, enables an efficient and well-performing communication scheme among the different components. Moreover, an indication mechanism for the different issues in a power grid is developed; it adopts the traffic-light model and works as a black box for the separate smart controllers of each CS, which are configured only by the CSP. Smart chargers enable a smooth adjustment of the charging power to avoid drastic changes in the grid state. To that end, two types of intelligent controllers are developed and tested. While the first controller is inspired by fuzzy logic, the second one is inspired by the slow-start mechanism used in TCP to control congestion in computer networks. A simulative approach is applied to evaluate the solution, using the topology of a real low-voltage grid with realistic load and generation profiles. Furthermore, a set of metrics is defined regarding the main concerns of stakeholders: voltage, overloading, fairness, the satisfaction of EV users and the grid operator, as well as the grid-friendly behavior of a CS/EV user. The evaluation shows that the solution is able to guarantee safe operation of the grid. The proposed system can ensure grid-friendly charging at the cost of a small portion of user satisfaction; this sacrifice is compensated via a points-based reward system. Last but not least, the proposed distributed controllers are compared to two other controllers: (1) a decentralized controller based only on sensing the local voltage and (2) a very strict centralized controller focusing on grid-friendliness. The latter ensures proportional fairness among users regarding the objective function of the optimization problem solved in each simulation step. The distributed controllers are superior to the decentralized controller in terms of grid-friendliness and fairness and, in general, converge to the centralized one. KW - E-Mobility KW - Smart Charging KW - Grid-Friendliness KW - Charging Management KW - Grid Stability Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9302 ER - TY - THES A1 - Ansah, Frimpong T1 - Performance and optimization technologies for software defined industrial networks N2 - The concept of programmable networks is radically changing the way communication infrastructures are designed, integrated, and operated. Currently, the topic is spearheaded by concepts such as software-defined networking, forwarding and control element separation, and network function virtualization.
Notably, software-defined networking has attracted significant attention in telecommunications and data centers and is thus already deployed in some production-grade networks. Despite the prevalence of software-defined networking in these domains, industrial networks have yet to see its benefits in a way that encourages adoption. Misconceptions about the concept itself, the role of virtualization, and the underlying algorithms pose a significant obstacle. Furthermore, the desire to accommodate new services in the automation industry results in constantly increasing complexity of industrial networks. This is compounded by the requirement to provide stringent deterministic service guarantees for characteristically different applications, posing a significant challenge for management, configuration, and maintenance, as existing solutions are architecturally inflexible. Therefore, the first contribution of this thesis addresses the misconceptions around software-defined networking by providing a comparative analysis of programmable network concepts, detailing how software-defined networking compares with other concepts and how its principles can be leveraged to evolve industrial networks. Armed with the fundamental principles of programmable networks, the second contribution identifies virtualization technologies and proposes novel algorithms to provide varied quality-of-service guarantees on converged time-sensitive Ethernet networks using software-defined networking concepts. Finally, a performance analysis of a software-defined hybrid deployment solution for the control and management of time-sensitive Ethernet networks, integrating the proposed algorithms, is presented as an industrial use case that enables operators to harness the full potential of time-sensitive networks. KW - Performance KW - Software Defined Industrial Networks KW - Virtual Network Embedding KW - Schedulability Analysis KW - Worst-case Delay Analysis KW - Deterministic Petri-net and Queuing Networks Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9002 PB - Universität Passau CY - Passau ER - TY - THES A1 - Lang, Thomas T1 - AI-Supported Interactive Segmentation of 3D Volumes N2 - The segmentation of volumetric datasets, i.e., the partitioning of the data into disjoint sub-volumes with the goal of extracting information about these regions, is a difficult problem that has been discussed in medical imaging for decades. Due to ever-increasing imaging capabilities, in particular in X-ray computed tomography (CT) or magnetic resonance imaging, segmentation is also gaining interest in industrial applications, where the generated datasets grow especially large. Hence, most applications apply well-known techniques in a 2+1-dimensional manner, i.e., they apply image segmentation procedures to each slice separately and track the progress along the axis on which the slices are stacked. This discards the information on preceding or subsequent slices, which is often assumed to be nearly identical. In the industrial context, however, this assumption might prove wrong, since industrial parts can change their appearance significantly over the course of even a few slices. Moreover, artifacts can further distort the content of the slices.
Therefore, three-dimensional processing of voxel volumes is to be preferred, which induces constraints upon the segmentation procedures. For example, they must not rely on global information, as computing it efficiently is usually not feasible for big scans. Yet another frequent problem is that applications focus only on individual parts and algorithms are tailored to that case. The most prominent medical segmentation procedures do so by applying methods to specifically find, for example, the liver and only the liver of a patient. The implication is that the same method cannot be applied to find other parts of the scan, and such methods have to be designed individually for each object to be segmented. Flexible segmentation methods are also needed, specifically when partitioning unique scans. We define a unique scan to be a voxel dataset for which no comparable volume exists. Classical examples include the cultural heritage use case, where not only the objects themselves are unique but also the scan parameters are optimized to obtain the best image quality possible for that specific scan. This thesis introduces novel methods for voxelwise classification based on local geometric features. The latter are computed from local environments around each voxel and extract information in ways similar to human perception, namely by observing the similarity to geometric or textural primitives. These features serve as the foundation for learning the proposed voxelwise classifiers and for discriminating between segmented and unsegmented voxels. On the one hand, the classifiers perform fully automated clustering of volumes, for which a representative random sample is extracted first. On the other hand, a set of segmenting classifiers can be trained from a few seed voxels, i.e., volume elements for which a domain expert has marked whether they belong to the components to be segmented. The interactive selection offers the advantage that no completely labeled voxel volumes are necessary, and hence unique scans of objects for which no comparable scans exist can be segmented. Overall, it is shown that all proposed segmentation methods run in effectively linear time with respect to the number of voxels in the volume. Thus, voxel volumes without size restrictions can be segmented in an efficient linear pass through the volume. Finally, the segmentation performance is evaluated on selected datasets, showing that the introduced methods achieve good results on scans from a broad variety of domains, for both small and big voxel volumes.
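A compact sketch may clarify the overall voxelwise pipeline: compute a simple local feature per voxel (here only the mean and variance of a small neighborhood, far simpler than the geometric features proposed in the thesis) and label every voxel by its nearest seed-derived class centroid in a single linear pass. Everything below, including the synthetic volume and the seed positions, is illustrative; it assumes NumPy and SciPy are available.

    # Minimal voxelwise classification sketch: local features per voxel,
    # then nearest-centroid labeling learned from a few seed voxels.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(1)
    volume = rng.normal(0.0, 0.1, size=(64, 64, 64))
    volume[20:40, 20:40, 20:40] += 1.0          # a bright cubic 'component'

    # Local features: mean and variance over a 5x5x5 neighborhood per voxel.
    mean = uniform_filter(volume, size=5)
    var = uniform_filter(volume**2, size=5) - mean**2
    features = np.stack([mean, var], axis=-1)    # shape (64, 64, 64, 2)

    # Seed voxels 'marked by a domain expert': inside vs. outside the component.
    seeds = {1: [(30, 30, 30), (25, 35, 28)], 0: [(5, 5, 5), (60, 10, 50)]}
    centroids = {lbl: np.mean([features[p] for p in pts], axis=0)
                 for lbl, pts in seeds.items()}

    # Linear pass: assign each voxel the label of the nearest class centroid.
    flat = features.reshape(-1, 2)
    dists = np.stack([np.linalg.norm(flat - c, axis=1) for c in centroids.values()])
    labels = np.array(list(centroids.keys()))[dists.argmin(axis=0)].reshape(volume.shape)
    print(labels[30, 30, 30], labels[5, 5, 5])   # 1 (segmented), 0 (background)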
KW - Segmentation KW - Computed Tomography KW - Artificial Intelligence KW - Active Learning KW - Interactive KW - Machine learning KW - Image processing Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9221 ER - TY - THES A1 - Salehi Rizi, Fatemeh T1 - Graph Representation Learning for Social Networks N2 - Online social networks provide a rich source of information about millions of users worldwide. However, due to their sparsity and complex structure, analyzing these networks is quite challenging and expensive. Recently, graph embedding has emerged to map networked data into low-dimensional representations, i.e., vector embeddings. These representations are fed into off-the-shelf machine learning algorithms to simplify and speed up graph analytic tasks. Given the immense importance of social network analysis, in this thesis we study graph embedding for social networks in three directions.
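Before the three directions are detailed, a generic sketch may illustrate what a graph embedding computes. The fragment below (assuming the networkx and gensim packages) implements a bare-bones DeepWalk-style baseline in which truncated random walks are fed to word2vec, yielding one low-dimensional vector per vertex. It is a textbook baseline, not one of the models proposed in the thesis.

    # Bare-bones DeepWalk-style node embedding: random walks + word2vec.
    import random
    import networkx as nx
    from gensim.models import Word2Vec

    def random_walks(G, num_walks=10, walk_len=20, seed=42):
        rng = random.Random(seed)
        walks = []
        for _ in range(num_walks):
            for start in G.nodes():
                walk = [start]
                while len(walk) < walk_len:
                    walk.append(rng.choice(list(G.neighbors(walk[-1]))))
                walks.append([str(v) for v in walk])  # word2vec expects tokens
        return walks

    G = nx.karate_club_graph()
    model = Word2Vec(random_walks(G), vector_size=32, window=5,
                     min_count=0, sg=1, epochs=5, seed=42)
    embedding = model.wv["0"]   # 32-dimensional vector for vertex 0
    print(embedding.shape)      # (32,)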
Firstly, we focus on social networks at the microscopic level, primarily encoding the structural characteristics of users' personal networks, so-called ego networks. These representations are utilized in evaluation tasks whose performance depends on relational information from direct neighbors. For example, social circle prediction and event attendance inference both need structural information from neighbors in social networks. Secondly, we explore assessing the content of vector embeddings in terms of topological properties. We pursue this via two proposed approaches: (1) a learning-to-rank algorithm in which the model weights reveal the importance of properties at the subgraph level (ego networks), and (2) a regression model for the direct approximation of network statistical properties at the vertex level. Thirdly, we propose extensions of graph embedding to capture the sign or additional content of social networks. Users in social media often express their feelings and attitudes towards others, which forms sentiment links besides social links. We design a joint objective function whose terms capture the semantics of both social and sentiment links simultaneously. We also propose a multi-task learning framework for networks with attributes and labels by stacking autoencoders. The weights of the learning tasks are automatically assigned via an adaptive loss weighting layer. Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9211 ER - TY - THES A1 - Schmid, Matthias T1 - Towards Storing 3D Model Graphs in Relational Databases N2 - The increasing relevance of massive graph data reinforces the need for adequate graph data management. While several graph database engines have been developed, the storage of graph data in a relational database management system, and therefore the seamless integration into existing information systems, remains an open challenge. Motivated by the use case of integrating Building Information Modeling (BIM) data into the MonArch system, we propose a solution that transforms the BIM data into a property graph and stores this graph in the database system. We present a novel approach to efficiently storing property graph data in a relational database management system using JSON functionality and redundant storage of edges in adjacency lists, and we show how to import huge datasets into this schema. Applying this approach, we import datasets of up to nearly 1 TB of disk space into the relational database while having only 96 GB of main memory available. We also present a new approach to retrieving data from this database schema, translating queries written in the popular property graph query language Cypher into SQL. Hence, we provide an intuitive way to write semantically complex queries. We also demonstrate the efficiency of our approach using the standardized Linked Data Benchmark Council Social Network Benchmark (LDBC-SNB) framework. Our approach increases the throughput for this benchmark by up to 85 times compared to existing approaches for RDBMSs. In addition, we propose a new method to transform BIM data into the property graph model and show how to apply the aforementioned property graph storage to this data. We can import IFC models of up to 300 MB within five minutes. We show the suitability of our approach using our own use-case-specific benchmark, which we integrated into the previously mentioned Social Network Benchmark. For our interactive use-case-specific queries, we achieve response times below 5 ms in 99% of all executions.
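The storage idea, JSON-typed properties combined with redundantly materialized adjacency lists, can be sketched in a few lines. The snippet below uses Python's sqlite3 module purely as a stand-in (assuming an SQLite build with the JSON1 functions); the thesis targets a full relational DBMS, and all table and column names here are invented for the example.

    # Sketch: property graph in a relational DB with JSON properties and a
    # redundant adjacency list per vertex for fast neighborhood expansion.
    import json
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE nodes (id INTEGER PRIMARY KEY, label TEXT,
                            properties TEXT,   -- JSON document
                            adjacency TEXT);   -- JSON array of neighbor ids
        CREATE TABLE edges (src INTEGER, dst INTEGER, type TEXT);
    """)

    def add_node(nid, label, props):
        db.execute("INSERT INTO nodes VALUES (?, ?, ?, '[]')",
                   (nid, label, json.dumps(props)))

    def add_edge(src, dst, etype):
        db.execute("INSERT INTO edges VALUES (?, ?, ?)", (src, dst, etype))
        # Redundantly maintain the adjacency list of the source vertex.
        db.execute("UPDATE nodes SET adjacency = json_insert(adjacency, '$[#]', ?)"
                   " WHERE id = ?", (dst, src))

    add_node(1, "Wall", {"material": "brick"})
    add_node(2, "Window", {"width_m": 1.2})
    add_edge(1, 2, "CONTAINS")

    # Cypher-like lookup "MATCH (w:Wall)-->(n) RETURN n" via the adjacency list:
    row = db.execute("SELECT adjacency FROM nodes WHERE label = 'Wall'").fetchone()
    for nid in json.loads(row[0]):
        width = db.execute("SELECT json_extract(properties, '$.width_m')"
                           " FROM nodes WHERE id = ?", (nid,)).fetchone()
        print(nid, width[0])   # 2 1.2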
Finally, we present how the aforementioned approach to storing BIM data in a relational database management system is integrated into the existing MonArch system by splitting the different functionalities of our approach into a microservice architecture. KW - Graph-based database models KW - Relational database model KW - Property graph KW - Industry Foundation Classes KW - IFC Store Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10353 ER -