TY - THES A1 - Lehner, Sabrina T1 - The Asymptotic Behaviour of the Riemann Mapping Function at Analytic Cusps N2 - The well-known Riemann Mapping Theorem states the existence of a conformal map of a simply connected proper domain of the complex plane onto the upper half plane. One of the main topics in geometric function theory is to investigate the behaviour of the mapping functions at the boundary of such domains. In this work, we always assume that a piecewise analytic boundary is given. Here, we have to distinguish between regular and singular boundary points. While the asymptotic behaviour at regular boundary points can be investigated by using the Schwarz reflection principle at analytic arcs, the situation at singular boundary points is far more complicated. In the latter scenario, two cases have to be differentiated: analytic corners and analytic cusps. The first part of the thesis deals with the asymptotic behaviour at analytic corners where the opening angle is greater than 0. The results of Lichtenstein and Warschawski on the asymptotic behaviour of the Riemann map and its derivatives at an analytic corner are presented, as well as the much stronger result of Lehman that the mapping function can be developed into a certain generalised power series, which in turn makes it possible to examine the o-minimal content of the Riemann Mapping Theorem. To obtain a similar statement for domains with analytic cusps, it is necessary to investigate the asymptotic behaviour of a Riemann map at the cusp and, based on this result, to determine the asymptotic power series expansion. Therefore, the aim of the second part of this work is to investigate the asymptotic behaviour of a Riemann map at an analytic cusp. A simply connected domain has an analytic cusp if the boundary is locally given by two analytic arcs such that the interior angle vanishes. Besides the asymptotic behaviour of the mapping function, the behaviour of its derivatives, its inverse, and the derivatives of the inverse is analysed. Finally, we present a conjecture on the asymptotic power series expansion of the mapping function at an analytic cusp. N2 - Der wohlbekannte Riemannsche Abbildungssatz liefert die Existenz einer konformen Abbildung eines einfach zusammenhängenden, echten Teilgebietes der komplexen Ebene auf die obere Halbebene. Die Untersuchung des Verhaltens solcher Abbildungen am Rand der Gebiete ist ein zentrales Thema der geometrischen Funktionentheorie. In der vorliegenden Arbeit gehen wir stets von einem stückweise analytischen Rand aus. Dabei müssen wir reguläre und singuläre Randpunkte unterscheiden. Während das Verhalten einer Riemann-Abbildung an regulären Randpunkten mit Hilfe des Schwarzschen Spiegelungsprinzips an analytischen Kurvenbögen einfach zu bestimmen ist, gestaltet sich dies in der Situation von singulären Randpunkten sehr viel komplizierter. In diesem Fall müssen wir erneut eine Unterscheidung treffen, nämlich ob es sich um eine analytische Ecke oder eine analytische Spitze handelt. Der erste Teil der Dissertation beschäftigt sich mit dem asymptotischen Verhalten einer Riemann-Abbildung an analytischen Ecken. Eine solche liegt vor, falls der Öffnungswinkel zwischen den analytischen Kurvenstücken an dem singulären Randpunkt größer als 0 ist.
Es werden die Ergebnisse von Lichtenstein und Warschawski präsentiert, die sich mit dem Verhalten der Riemann-Abbildung und deren Ableitungen beschäftigt haben, sowie das stärkere Ergebnis von Lehman, welches besagt, dass die Abbildung in eine verallgemeinerte Potenzreihe entwickelt werden kann. Unter Verwendung dieses Resultats konnte bereits der o-minimale Gehalt des Riemannschen Abbildungssatzes untersucht werden. Um ein ähnliches Ergebnis für den Fall, dass das Gebiet analytische Spitzen hat, zu erhalten, benötigen wir zunächst das asymptotische Verhalten der Riemann-Abbildung an einer Spitze. Darauf aufbauend kann die asymptotische Reihenentwicklung untersucht werden. Daher zielt der zweite Abschnitt dieser Arbeit darauf ab, das Verhalten der Abbildung an einer Spitze zu bestimmen. Dabei sprechen wir von einer analytischen Spitze, wenn der Rand des Gebietes lokal durch zwei reguläre analytische Bögen gegeben ist, deren Öffnungswinkel verschwindet. Neben dem asymptotischen Verhalten der Abbildung wird auch das Verhalten der Ableitungen, ihrer Umkehrfunktion und deren Ableitungen untersucht. Abschließend präsentieren wir eine Vermutung über die asymptotische Reihenentwicklung der Abbildung an einer analytischen Spitze. KW - Riemann mapping theorem KW - analytic cusp KW - asymptotic behaviour KW - boundary behaviour KW - Geometrische Funktionentheorie KW - Randverhalten KW - Riemannscher Abbildungssatz Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3587 ER - TY - INPR A1 - Zwicklbauer, Stefan A1 - Seifert, Christin A1 - Granitzer, Michael T1 - DoSeR - A Knowledge-Base-Agnostic Framework for Entity Disambiguation Using Semantic Embeddings N2 - Entity disambiguation is the task of mapping ambiguous terms in natural-language text to the corresponding entities in a knowledge base. It finds application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning, and Question Answering. In this work, we propose DoSeR (Disambiguation of Semantic Resources), a (named) entity disambiguation framework that is knowledge-base-agnostic in terms of RDF (e.g. DBpedia) and entity-annotated document knowledge bases (e.g. Wikipedia). Initially, our framework automatically generates semantic entity embeddings given one or multiple knowledge bases. Subsequently, DoSeR accepts documents with a given set of surface forms as input and collectively links them to an entity in a knowledge base with a graph-based approach. We evaluate DoSeR on seven different data sets against publicly available, state-of-the-art (named) entity disambiguation frameworks. Our approach outperforms the state-of-the-art approaches that make use of RDF knowledge bases and/or entity-annotated document knowledge bases by up to 10% in F1 measure. KW - Linked Data KW - Entity Disambiguation KW - Neural Networks KW - Semantic Web Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3670 ER - TY - THES A1 - Pöhls, Henrich C. T1 - Increasing the Legal Probative Value of Cryptographically Private Malleable Signatures N2 - Die Arbeit befasst sich mit der Erarbeitung von technischen Vorgaben und deren Umsetzung in kryptographisch sichere Verfahren von datenschutzfreundlichen, veränderbaren digitalen Signaturverfahren (private malleable signature schemes oder MSS) zur Erlangung möglichst hoher rechtlicher Evidenz.
Im Recht werden bestimmte kryptographische Algorithmen, Schlüssellängen und deren korrekte organisatorische Anwendungen zur Erzeugung elektronisch signierter Dokumente als rechtssicher eingestuft. Dies kann zu einer Beweiserleichterung mithilfe signierter Dokumente führen. So gelten nach Verordnung (EU) Nr. 910/2014 (eIDAS) qualifiziert signierte elektronische Dokumente entweder als Anscheinsbeweis der Echtheit oder ihnen wird gar eine gesetzliche Vermutung der Echtheit zuteil. Gesetzlich anerkannte technische Verfahren, die einen solch erhöhten Beweiswert erreichen, erfüllen mithilfe von Kryptographie im wesentlichen zwei Eigenschaften: Integritätsschutz (integrity), also die Erkennung der Abwesenheit von unerwünschten Änderungen und die Zurechenbarkeit des unveränderten Dokumentes zum Signaturersteller (accountability). Hingegen ist der größte Vorteil veränderbarer digitaler Signaturverfahren (MSS) die „privacy“ genannte Eigenschaft: Eine autorisierte Änderung verbirgt den vorherigen Inhalt. Des Weiteren bleibt die Signatur solange valide wie ausschliesslich autorisierte Änderungen vorgenommen werden. Wird diese Eigenschaft kryptographisch nachweislich sicher erfüllt, so spricht man von einem private malleable signature scheme. In der Arbeit werden zwei verbreitete Formen, die sogenannten redactable signature schemes (RSS) und die sanitizable signature schemes (SSS), eingehend betrachtet. Diese erlauben vielfältige Einsatzmöglichkeiten, zum Beispiel eine autorisierte spätere Veränderung zur Wahrung von Geschäftsgeheimnissen oder zum Datenschutz: Der Unterzeichner delegiert so beispielsweise über ein private redactable signature scheme nur das nachträgliche Schwärzen (redaction). Dies schränkt die Veränderbarkeit auf das Entfernen von Informationen ein, erlaubt aber wirksam die Wahrung des Datenschutzes oder den Schutz von (Geschäfts)geheimnissen indem diese Informationen irreversibel für Angreifer entfernt werden. Die kryptographische privacy Eigenschaft besagt, dass es nun nicht mehr effizient möglich ist, aus dem geschwärzten Dokument Wissen über die geschwärzten Informationen zu erlangen, auch und gerade nicht für den Signaturprüfer. Die Arbeit geht im Kern der Frage nach, ob ein MSS sowohl die kryptographische Eigenschaft „privacy“ als auch gleichzeitig die Eigenschaften „integrity“ und „accountability“ mit ausreichend hohen Sicherheitsniveaus erfüllen kann. Das Ziel ist es, dass ein MSS gleichzeitig ein solch ausreichend hohen Grad an Sicherheit erfüllt, dass (1) die autorisierten nachträglichen Änderungen zum Schutze von Geschäftsgeheimnissen oder personenbezogenen Daten eingesetzt werden können, und dass (2) dem Dokument, welches mit dem speziellen Signaturverfahren signiert wurde, ein erhöhter Beweiswert beigemessen werden kann. In Bezug auf letzteres stellt die Arbeit sowohl die technischen Vorgaben, welche für qualifizierte elektronische Signaturen (nach Verordnung (EU) Nr. 910/2014) gelten, in Bezug auf die nachträgliche Änderbarkeit dar, als auch konkrete kryptographische Eigenschaften und Verfahren um diese Vorgaben kryptographisch beweisbar zu erreichen. Insbesondere weisen veränderbare Signaturen (MSS) einen anderen Integritätsschutz als traditionelle digitale Signaturen auf: Eine signierte Nachricht darf nachträglich durch eine definierte dritte Partei in einer definierten Art modifiziert werden. Diese sogenannte autorisierte Änderung (authorized modification) kann auch ohne Kenntnis des geheimen Signaturschlüssels des Unterzeichners durchgeführt werden. 
Bei der Verifikation der digitalen Signatur durch den Signaturprüfer bleiben der ursprüngliche Signierende und dessen Einwilligung zur autorisierten Änderung kryptographisch verifizierbar, auch wenn autorisierte Änderungen vorgenommen wurden. Die Arbeit umfasst folgende Bereiche: 1. Analyse der Rechtsvorgaben zur Ermittlung der rechtlich relevanten technischen Anforderungen hinsichtlich des geforderten Integritätsschutzes (integrity protection) und hinsichtlich des Schutzes von personenbezogenen Daten und (Geschäfts)geheimnissen (privacy protection), 2. Definition eines geeigneten Integritäts-Begriffes zur Beschreibung der Schutzfunktion von existierenden malleable signatures und bereits rechtlich anerkannten Signaturverfahren, 3. Harmonisierung und Analyse der kryptographischen Eigenschaften existierender malleable signature Verfahren in Hinblick auf die rechtlichen Anforderungen, 4. Entwicklung neuer und beweisbar sicherer kryptographischer Verfahren, 5. abschließende Bewertung des rechtlichen Beweiswertes (probative value) und des Datenschutzniveaus anhand der technischen Umsetzung der rechtlichen Anforderungen. Die Arbeit kommt zu dem Ergebnis, dass zunächst einmal jedwede (autorisierte wie auch unautorisierte) Änderung von einem kryptographisch sicheren malleable signature Verfahren (MSS) ebenfalls erkannt werden muss, um Konformität mit Verordnung (EU) Nr. 910/2014 (eIDAS) zu erlangen. Eine solche Änderungserkennung, durch die der Signaturprüfer ohne Zuhilfenahme weiterer Parteien oder Geheimnisse die Abwesenheit von autorisierten und unautorisierten Änderungen erkennt, wurde im Rahmen der Arbeit entwickelt (non-interactive public accountability (PUB)). Diese neue kryptographische Eigenschaft wurde veröffentlicht und bereits von Arbeiten Anderer aufgegriffen. Des Weiteren werden neue kryptographische Eigenschaften und redactable signature und sanitizable signature Verfahren vorgestellt, welche zusätzlich zu dieser Änderungserkennung einen starken Schutz gegen die Aufdeckung des Originals ermöglichen. Werden geeignete Eigenschaften erfüllt, so wird für bestimmte Fälle ein technisches Schutzniveau erzielt, welches mit klassischen Signaturen vergleichbar ist. Damit lässt sich die Kernfrage positiv beantworten: Private MSS können ein Integritätsschutzniveau erreichen, welches dem rechtlich anerkannter digitaler Signaturen technisch entspricht, aber dennoch nachträgliche Änderungen autorisieren kann, welche einen starken Schutz gegen die Wiederherstellung des Originals ermöglichen. N2 - This thesis distills technical requirements for an increased probative value and data protection compliance, and maps them onto cryptographic properties for which it constructs provably secure and especially private malleable signature schemes (MSS). MSS are specialised digital signature schemes that allow the signatory to authorize certain subsequent modifications, which will not negatively affect the signature verification result. Legally, regulations such as European Regulation 910/2014 (eIDAS), ‘follow-up’ to longstanding Directive 1999/93/EC, describe the requirements in technology-neutral language. eIDAS states that, when a digital signature meets the full requirements, it becomes a qualified electronic signature and then it “[...] shall have the equivalent legal effect of a handwritten signature [...]” [Art. 25 Regulation 910/2014].
The question of what legal effect this has with regards to the probative value that is assigned is actually not determined in EU Regulation 910/2014 but in European member state law. This thesis concentrates in its analysis on the — in this respect detailed — German Code of Civil Procedure (ZPO). Following the ZPO, a signature awards the signed document with at least a high probative value of prima facie evidence. For signed documents of official authority the ZPO’s statutory rules even award evidence with a legal presumption of authenticity. This increased probative value is also awarded to electronic documents bearing electronic signatures when those conform to the eIDAS requirements. The requirements centre around the technical security goals of integrity and accountability. Technical mechanisms use cryptographic means to detect the absence of unauthorized modifications (integrity) and allow to authenticate the signed document’s signatory (accountability). However, the specialised malleable signature schemes’ main advantage is a cryptographic property termed privacy: An authorized subsequent modification will protect the confidentiality of the modified original. Moreover, the MSS will retain a verifiable signature if only authorized modifications were carried out. If these properties are reached with provable security the schemes are called private malleable signature schemes. This thesis analyses two forms of MSS discussed in existing literature: Redactable signature schemes (RSS) which allow subsequent deletions, and sanitizable signature schemes (SSS) which allow subsequent edits. These two forms have many application scenarios: A signatory can delegate that a later redaction might take place while retaining the integrity and authenticity protection for the still remaining parts. The verification of a signature on a redacted or sanitized document still enables the verifying entity to corroborate the signatory’s identity with the help of flanking technical and organisational mechanisms, e.g. a trusted public key infrastructure. The valid signature further corroborates the absence of unauthorized changes, because the MSS is still cryptographically protecting the signed document from undetected unauthorized changes inflicted by adversaries. Due to the confidentiality protection for the overwritten parts of the document following from cryptographic privacy the sanitization and redaction can be used to safeguard personal data to comply with data protection regulation or withhold trade-secrets. The research question is: Can a malleable signature scheme be private to be compliant with EU data protection regulation and at the same time fulfil the integrity protection legally required in the EU to achieve a high probative value for the data signed? Answering this requires to understand the protection requirements in respect to accountability and integrity rooted in Regulation 910/2014 and related legal texts. This thesis has analysed the previous Directive 1999/93/EC as well as German SigG and SigVO or UK and US laws. Besides that, legal texts, laws and regulations for the protection requirements of personal data (or PII) have been analysed to distill the confidentiality requirements, e.g. the German BDSG or the EU Regulation 2016/679 (GDPR). 
Moreover, an answer to the research question entails understanding the relevant difference between regular digital signature schemes, like RSASSA-PSS from PKCS-v2.2 [422], which are legally accepted mechanisms for generating qualified electronic signatures, and MSS, whose legal status was completely unknown before this thesis, especially as MSS allow the authorized entity to adapt the signature, such that it remains valid after the authorized modification, without the knowledge or use of the signatory's signature generation key. On verification of an MSS, the verifying entity still sees a valid signature technically appointing the legal signatory as the origin of a document, which might — however — have undergone authorized modifications after the signature was applied. The thesis documents the results achieved in several domains: 1. Analysis of legal requirements towards integrity protection for an increased probative value and towards the confidentiality protection for use as a privacy-enhancing technique to comply with data protection regulation. 2. Definition of a suitable terminology for integrity protection to capture (a) the differences between classical and malleable signature schemes, (b) the subtleties among existing MSS, as well as (c) the legal requirements. 3. Harmonisation of existing MSS and their cryptographic properties and the analysis of their shortcomings with respect to the legal requirements. 4. Design of new cryptographic properties and their provably secure cryptographic instantiations, i.e., the thesis proposes nine new cryptographic constructions accompanied by rigorous proofs of their security with respect to the formally defined cryptographic properties. 5. Final evaluation of the increased probative value and data-protection level achievable through the eight proposed cryptographic malleable signature schemes. The thesis concludes that the detection of any subsequent modification (authorized and unauthorized) is of paramount legal importance in order to meet EU Regulation 910/2014. Further, this thesis formally defined a public form of the legally required integrity verification, which allows the verifying entity to corroborate the absence of any unauthorized modifications with a valid signature verification while simultaneously detecting the presence of an authorized modification — if at least one such authorized modification has subsequently occurred. This property, called non-interactive public accountability (PUB), has been formally defined in this thesis, was published, and has already been adopted by the academic community. It was carefully conceived not to negatively impact a base-line level of privacy protection, as non-interactive public accountability had to destroy an existing strong privacy notion of transparency, which was identified as a hindrance to legal equivalence arguments. With RSS and SSS constructions that meet these properties, the thesis can give a positive answer to the research question: Private MSS can reach a level of integrity protection and guarantee a level of accountability comparable to that of technical mechanisms that are legally accepted to generate qualified electronic signatures, giving an increased probative value to the signed document, while at the same time protecting the overwritten contents' confidentiality.
KW - Integrity KW - Privacy KW - Redactable Signature Scheme (RSS) KW - Sanitizable Signature Scheme (SSS) KW - eIDAS KW - Integrität KW - Elektronische Unterschrift KW - Beweiswürdigung KW - Datenschutz KW - Vertraulichkeit Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5823 ER - TY - THES A1 - Wölfl, Andreas T1 - Data Management in Certified Avionics Systems N2 - Data management is a cornerstone for any kind of information system - including the aerospace and aviation sector. In contrast to conventional domains, software development in the avionics domain must adhere to a legally binding certification process, called qualification. The success of the process depends on compliance with international standards, such as DO-178: Software Considerations in Airborne Systems and Equipment Certification. From a software developer's perspective, challenges arise in terms of methods and tools. Techniques that have a potential impact on the deterministic and predictable execution of avionics software are prohibited. The objective of this thesis' research is to develop a scalable method to realize data-management for multi-variant avionics software under the restrictions and constraints of the domain. Since avionics software faces very long-term life-cycles (up to 75 years), a particular focus is placed on maintenance and evolution. Based on the insights gained in a semi-structured interview at Airbus Helicopters, industrially established approaches to implement qualified avionics software are first assessed and then compared with respect to their strengths and weaknesses for data-management. As a result, a novel development approach is proposed, combining model-based techniques and product-line technology to derive the source code of highly specific data-management variants, as well as the majority of assets required for the qualification process, from a declarative system specification. In order to demonstrate the practicability of the approach in industry, a framework is presented that is deployed and applied at Airbus Helicopters to generate qualifiable data-management components for the variants of the NH90 helicopter. The maintainability is shown by means of a domain-specific optimization, in which the model-based and generative approach is used to establish safe memory overlays at compile-time. Key findings reveal a substantially reduced memory footprint (29.1% in the case of a real-world scenario), as well as a significantly facilitated implementation process, which would not be achievable using conventional methods for software development in the avionics domain. N2 - Datenverwaltung ist der Grundstein für jegliche Art von Informationssystemen – so auch in der Luft- und Raumfahrt. Im Unterschied zu konventionellen Domänen unterliegt die Software für Avionik-Systeme einem gesetzlich vorgeschriebenen Zertifizierungsprozess, genannt Qualifizierung. Der Erfolg dieses Vorgangs hängt primär von der Einhaltung internationaler Sicherheitsnormen ab. Aus der Sicht eines Software-Entwicklers ergeben sich hier Herausforderungen hinsichtlich erlaubter Methoden und Werkzeuge, denn Techniken, die den deterministischen Ablauf eines Programms gefährden können, sind verboten. Das Ziel dieser Arbeit ist es, ein skalierbares Verfahren zu entwickeln, um eine Datenmanagement-Komponente unter Einhaltung der Sicherheitsnormen für Avionik-Systeme in mehreren Varianten nicht nur zu realisieren, sondern auch langfristig warten zu können (bis zu 75 Jahre).
Basierend auf den Erkenntnissen, die durch ein semi-strukturiertes Entwickler-Interview bei Airbus Helicopters gewonnen wurden, werden industriell etablierte Methoden zur Implementierung und Qualifizierung von Avionik-Software zunächst analysiert und anschließend hinsichtlich ihrer Stärken und Schwächen für Datenmanagement bewertet. Als Resultat wird ein neuartiger Ansatz vorgestellt, der durch eine Kombination aus modellbasierten Methoden und Produktlinien-Technologie sowohl den Quellcode einer spezifischen Datenmanagement-Variante als auch weitere Erzeugnisse, die für die Qualifizierung zu erbringen sind, aus einer deklarativen Systemspezifikation ableitet. Als Beispiel für die Praktikabilität des Verfahrens wird die Architektur einer Werkzeugkette vorgestellt, die bei Airbus eingesetzt wird, um qualifizierbare Datenmanagement-Varianten für die eingebettete Software des NH90 Helikopters zu generieren. Die Wartbarkeit wird durch eine domänenspezifische Optimierung demonstriert, die durch den modellbasierten und generativen Ansatz eine sichere Überlappung von Speicherbereichen zur Compile-Zeit ermöglicht. Zu den Ergebnissen zählen nicht nur verringerter Speicherverbrauch (29,1% in einem realen Szenario), sondern auch eine effiziente Umsetzung, die mit den etablierten Methoden zur Software-Entwicklung in der Avionik-Domäne nicht zu erreichen wäre. KW - Avionik KW - Datenverarbeitung Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5758 ER - TY - THES A1 - Stenzer, Alexander T1 - Ein Ansatz zur semantik-basierten Anfragerelaxation für hierarchische Strukturen T1 - An approach to semantic-driven query relaxation for hierarchical structures N2 - Sowohl für Monumentalbauten als Teil unseres Kulturgutes im Speziellen als auch für Gebäude im Allgemeinen wurden im Rahmen des MonArch-Projektes verschiedene Methoden zur digitalen Speicherung von Informationen über Monumentalbauten erforscht. Das daraus entstandene MonArch-System ist für die Dokumentation von Monumentalbauten verwendbar und speichert das digitale Modell des Bauwerks in einer relationalen Datenbank. Das digitale Modell des Bauwerks entsteht durch eine Segmentierung in Gebäudeteile, die dann in einer Strukturhierarchie zusammengefasst werden können. Als Strukturhierarchie versteht man in diesem Zusammenhang eine Hierarchie von Gebäudeteilen, die in einer Teil-von-Beziehung stehen. Die Strukturhierarchie erlaubt es, Informationen, z.B. Dokumente, mit einem räumlichen Bezug auszuzeichnen. Zusätzlich wird eine Themenhierarchie unterstützt, die es erlaubt, Informationen thematisch mit Begriffen zu beschreiben. Betrachtet man räumliche und thematische Anfragen in vernetzten MonArch-Systemen, in denen sich mehrere Gebäudearchive zusammenschließen, ist diese starke Bindung der Information an die einzigartige Struktur jedes Gebäudes ein Hindernis für ein einfaches Verfahren zur räumlichen Suche. Da sich jedes Gebäude in seinem speziellen strukturellen und räumlichen Aufbau unterscheidet, liefert eine räumliche Anfrage, die speziell auf diese Eigenheiten eines Gebäudes ausgerichtet ist, für andere Gebäude keine Suchergebnisse. Für thematische Anfragen stellen nicht kompatible Themenhierarchien ein Hindernis dar, die eine übergreifende thematische Anfrage verhindern. Die größte Herausforderung ist es, Struktur- und Themenhierarchien aufeinander abzubilden.
Zur Lösung des geschilderten Problems wird in vernetzten Informationssystemen auf eine geeignete Transformation der ursprünglichen Anfrage zurückgegriffen, um den Anfragefokus zu erweitern (Relaxation) oder eine Anpassung an die Gegebenheiten des entfernten Informationssystems zu erreichen (Transformation). Das Anfragetransformations- und -relaxationsverfahren, das in dieser Arbeit vorgestellt wird, nutzt eine Generalisierungsbeziehung aus, um ausgehend von einer Anfrage an eine spezielle Struktur- und Themenhierarchie eine automatische Transformation der Anfrage durchzuführen. Bei Themenhierarchien sind gemeinsame Oberthemen ein Ansatzpunkt. Bei Strukturhierarchien können Typinformationen zu Gebäudeteilen die Generalisierungsbeziehung darstellen. Die transformierte und dadurch relaxierte Anfrage kann dann an ein Netzwerk von MonArch-Systemen gestellt werden, ohne dass eine manuelle Auswahl der Gebäudeteile in anderen Strukturhierarchien oder eine angepasste Themenauswahl erfolgen muss. Dazu muss die Strukturhierarchie der anderen Gebäude im Netzwerk von MonArch-Systemen nicht bekannt sein. Im Rahmen der vorliegenden Arbeit werden verschiedene Relaxationsverfahren, z.B. ein angepasstes Spreading-Activation-Verfahren, zur automatischen Anfragetransformation von räumlichen und thematischen Anfragen vorgestellt, mit dem Ziel, eine vollständige Abbildung zwischen den Strukturhierarchien von Gebäuden und Themenhierarchien zu vermeiden. Erreicht wird das Ziel durch eine Erweiterung des MonArch-Datenmodells und eine Verallgemeinerung der MonArch-Anfragen, die eine Anfragetransformation zum Anfragezeitpunkt erlauben. N2 - For historic buildings as part of our cultural heritage in particular, as well as for buildings in general, the MonArch project developed different methods for the digital storage of information about historic buildings. The resulting MonArch system is used for documenting historic buildings and stores the digital model of a building in a relational database. The digital model of the building is created by segmenting the building into several parts, which can be summarized in a structure hierarchy. In this context, a structure hierarchy is a part-of hierarchy of building parts. The structure hierarchy allows tagging information, e.g. documents, with a spatial reference. In addition, a topic hierarchy allows describing information by topics. Looking at spatial and topical queries in networked MonArch systems, in which several building archives are connected with each other, information rigidly fixed to the unique structure of each building is an obstacle to a simple method for spatial queries. As each building has its individual structural and spatial composition, a spatial query that is specific to a certain structure hierarchy does not provide any results for other buildings. For topical queries, incompatible topic hierarchies are an obstacle which prevents a general query. The main challenge is to provide an alignment of structural and thematic hierarchies. In order to solve the problem described above in a networked information system, the solution is to transform the original query by query relaxation to extend the focus of the query or to adapt the query to the conditions of other information systems. The query transformation and relaxation method presented in this thesis uses a generalization to automatically transform a query to a specific structure and topic hierarchy. In the case of topical hierarchies, common topics are a starting point.
For structural hierarchies, type information of building parts can be used as the generalization relationship. The transformed and thereby relaxed query can then be sent to a network of MonArch systems without a manual selection of the building parts in other structural hierarchies or an appropriate selection of topics. The structural hierarchies of other buildings in the network of MonArch systems can remain unknown. In this thesis, different relaxation methods, e.g. an adapted spreading activation method, for the automatic query transformation of spatial and topical queries are presented, with the aim of avoiding a complete alignment between the structural hierarchies of buildings and topic hierarchies. The objective is achieved by the extension of the MonArch data model and queries, which allow a query transformation at runtime. KW - Anfragerelaxation KW - hierarchische Strukturen KW - räumlich KW - thematisch KW - Spreading-Activation KW - Abfragesprache KW - Datenbanksystem KW - Datenbanksprache Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5746 ER - TY - THES A1 - Lopera Gonzalez, Luis Ignacio T1 - Mining Functional and Structural Relationships of Context Variables in Smart-Buildings N2 - The Internet of Things (IoT) is a network of computational services, devices, and people, which share information with each other. In IoT, inter-system communication is possible and human interaction is not required. IoT devices are penetrating the home and office building environments. According to current estimates, about 35 billion IoT devices will be connected by the year 2021. In the IoT business model, value comes from integrating devices into applications, e.g., home and office automation. In general, an IoT application associates different information sources with actions which can modify the environment, e.g., change the room's temperature, inform a person, e.g., send an e-mail, or activate other services, e.g., buy milk on-line. In this thesis, we focus on the commissioning and verification processes of IoT devices used in building automation applications. Within a building's lifespan, new devices are added, interior spaces are refurbished, and faulty devices are replaced. All of these changes are currently made manually. Furthermore, consider that a context-aware Building Management System (BMS) is an IoT application, which measures direct-context from the building's sensors to characterize environmental conditions, user locations, and state. Additionally, a BMS combines sensor information to derive inferred-context, such as user activity. Similar to IoT devices, inferred-context instances have to be created manually. As the number of devices and inferred-context instances increases, keeping track of all associations becomes a time-consuming and error-prone task. The hypothesis of the thesis is that users who interact with the building create use-patterns in the data, which describe functional relations between devices and inferred-context instances, e.g., which desk-movement sensor is used to infer desk-presence and controls which overhead light; additionally, use-patterns can also provide structural relations, e.g., the relative position of spatial sensors. To test the hypothesis, this thesis presents an extension to the new IoT class rule programming paradigm, which simplifies rule creation based on classes. The proposed extension uses a semantic compiler to simplify the device and inferred-context associations.
Using direct-context information and template classes, the compiler creates all possible inferred-context instances. Buildings using context-aware BMSs will have a dynamic response to user behaviour, e.g., required illumination for computer-work is provided by adjusting blinds or increasing the dim setting of overhead ceiling lamps. We propose a rule mining framework to extract use-patterns and find the functional and structural relationships between devices. The rule mining framework uses three stages: (1) event extraction, (2) rule mining, (3) structure creation. The event extraction combines the building's data into a time-series of device events. Then, in the rule mining stage, rules are mined from the time series, where we use the established temporal interval tree association rule learner algorithm. Additionally, we propose a rule extraction algorithm for spatial sensor data. The algorithm is based on a statistical analysis of user transition times between adjacent sensors. We also introduce a new rule extraction algorithm based on increasing belief. In the last stage, structure creation uses the extracted rules to produce device association groups, a hierarchical representation of the building, or the relative location of spatial sensors. The proposed algorithms were tested using a year-long installation in a living-lab consisting of a four-person office, a 12-person open office, and a meeting room. For the spatial sensors, four locations within public buildings were used: a meeting room, a hallway, a T-crossing, and a foyer. The recording times range from two weeks to two months depending on scenario complexity. We found that user-generated patterns appear in building data. The rule mining framework produced structures that represent functional and spatial relationships of the building's devices and provide sufficient information to automate maintenance tasks, e.g., automatic device naming. Furthermore, we found that environmental changes are also a source of device data patterns, which provide additional associations. For example, using the framework we found the façade group for exterior light sensors. The façade group can be used to automatically find an alternative signal source to replace broken outdoor light sensors. Finally, the rule mining framework successfully retrieved the relative location of spatial sensors in all locations but the foyer. KW - Internet of Things KW - Rule Mining KW - Functional relationships KW - Structural relationships KW - Smart buildings KW - Internet der Dinge KW - Gebäudeleittechnik KW - Datamining Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5737 ER - TY - JOUR A1 - Kronawitter, Stefan A1 - Lengauer, Christian T1 - Polyhedral Search Space Exploration in the ExaStencils Code Generator JF - ACM Transactions on Architecture and Code Optimization N2 - Performance optimization of stencil codes requires data locality improvements. The polyhedron model for loop transformation is well suited for such optimizations with established techniques, such as the PLuTo algorithm and diamond tiling. However, in the application domain of our project ExaStencils, namely stencil codes, it fails to yield optimal results. As an alternative, we propose a new, optimized, multi-dimensional polyhedral search space exploration and demonstrate its effectiveness: we obtain better results than existing approaches in several cases.
We also propose how to specialize the search for the domain of stencil codes, which dramatically reduces the exploration effort without significantly impairing performance. KW - Software performance KW - Source code generation KW - Discrete space search Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5778 SN - 1544-3973 VL - 15 IS - 4 ER - TY - CHAP A1 - Parra Rodriguez, Juan D. A1 - Posegga, Joachim T1 - RAPID: Resource and API-Based Detection Against In-Browser Miners T2 - Proceedings of the 34th Annual Computer Security Applications Conference N2 - Direct access to the system's resources such as the GPU, persistent storage and networking has enabled in-browser crypto-mining. Thus, there has been a massive response by rogue actors who abuse browsers for mining without the user's consent. This trend has grown steadily over the last months, until this practice, i.e., CryptoJacking, was acknowledged as the number one security threat by several antivirus companies. Considering this, and the fact that these attacks do not behave like JavaScript malware or other Web attacks, we propose and evaluate several approaches to detect in-browser mining. To this end, we collect information from the top 330,500 Alexa sites. Mainly, we used real-life browsers to visit sites while monitoring resource-related API calls and the browser's resource consumption, e.g., CPU. Our detection mechanisms are based on dynamic monitoring, so they are resistant to JavaScript obfuscation. Furthermore, our detection techniques can generalize well and classify previously unseen samples with up to 99.99% precision and recall for the benign class and up to 96% precision and recall for the mining class. These results demonstrate the applicability of detection mechanisms as a server-side approach, e.g., to support the enhancement of existing blacklists. Last but not least, we evaluated the feasibility of deploying prototypical implementations of some detection mechanisms directly on the browser. Specifically, we measured the impact of in-browser API monitoring on page-loading time and performed micro-benchmarks for the execution of some classifiers directly within the browser. In this regard, we ascertain that, even though there are engineering challenges to overcome, it is feasible and beneficial for users to bring the mining detection to the browser. KW - Web Security KW - WebRTC KW - postMessage KW - Browser Security KW - Content Security Policy Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6550 SN - 978-1-4503-6569-7 PB - ACM CY - New York, NY, USA ER - TY - CHAP A1 - Parra Rodriguez, Juan D. A1 - Schreckling, Daniel A1 - Posegga, Joachim T1 - Addressing Data-Centric Security Requirements for IoT-Based Systems T2 - 2016 International Workshop on Secure Internet of Things (SIoT) N2 - Allowing users to control access to their data is paramount for the success of the Internet of Things; therefore, it is imperative to ensure it, even when data has left the users' control, e.g. shared with cloud infrastructure. Consequently, we propose several state-of-the-art mechanisms from the security and privacy research fields to cope with this requirement. To illustrate how each mechanism can be applied, we derive a data-centric architecture providing access control and privacy guarantees for the users of IoT-based applications. Moreover, we discuss the limitations and challenges related to applying the selected mechanisms to ensure access control remotely.
Also, we validate our architecture by showing how it empowers users to control access to their health data in a quantified self use case. KW - Internet of Things KW - Security Architecture KW - Data-Centric Security KW - Differential Privacy KW - Secure Cloud Storage Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6546 SN - 978-1-5090-5091-8 PB - IEEE Xplore CY - Heraklion, Greece ER - TY - CHAP A1 - Parra Rodriguez, Juan D. A1 - Posegga, Joachim T1 - Local Storage on Steroids: Abusing Web Browsers for Hidden Content Storage and Distribution T2 - International Conference on Security and Privacy in Communication Systems N2 - Analysing security assumptions made for the WebRTC and postMessage APIs led us to find a novel attack abusing the browsers' persistent storage capabilities. The presented attack can be executed without the website visitor's knowledge, and it requires neither browser vulnerabilities nor additional software on the browser's side. To exemplify this, we study how an attacker can use browsers to create a network for persistent storage and distribution of arbitrary data. In our proof of concept, the total storage of the network, and therefore the space used within each browser, grows linearly with the number of origins delivering the malicious JavaScript code. Further, data transfers between browsers are not restricted by the Same Origin Policy, which allows for a unified cross-origin browser network, regardless of the origin from which the script executing the functionality is loaded. In the course of our work, we assess the feasibility of a real-life deployment of the network by running experiments using Linux containers and browser automation tools. Moreover, we show how security mechanisms against third-party tracking, cross-site scripting and click-jacking can diminish the attack's impact, or even prevent it. KW - Web Security KW - WebRTC KW - postMessage KW - Browser Security Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6572 SN - 978-3-030-01704-0 PB - Springer CY - Cham ER - TY - CHAP A1 - Parra Rodriguez, Juan D. A1 - Posegga, Joachim T1 - CSP & Co. Can Save Us from a Rogue Cross-Origin Storage Browser Network! But for How Long? T2 - Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy N2 - We introduce a new browser abuse scenario where an attacker uses local storage capabilities without the website visitor's knowledge to create a network of browsers for persistent storage and distribution of arbitrary data. We describe how security-aware users can use mechanisms such as the Content Security Policy (CSP), sandboxing, and third-party tracking protection, i.e., CSP & Company, to limit the network's effectiveness. From another point of view, we also show that the upcoming Suborigin standard can inadvertently thwart existing countermeasures, if it is adopted. KW - Web Security KW - WebRTC KW - PostMessage KW - Browser Security KW - Parasitic Computing Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6561 SN - 978-1-4503-5632-9 PB - ACM CY - New York, NY, USA ER - TY - THES A1 - Ganser, Stefan T1 - Iterative Schedule Optimization for Parallelization in the Polyhedron Model N2 - In high-performance computing, one primary objective is to exploit the performance that the given target hardware can deliver to the fullest.
Compilers that have the ability to automatically optimize programs for a specific target hardware can be highly useful in this context. Iterative (or search-based) compilation requires little or no prior knowledge and can adapt more easily to concrete programs and target hardware than static cost models and heuristics. Thereby, iterative compilation helps in situations in which static heuristics do not reflect the combination of input program and target hardware well. Moreover, iterative compilation may enable the derivation of more accurate cost models and heuristics for optimizing compilers. In this context, the polyhedron model is of help as it provides not only a mathematical representation of programs but, more importantly, a uniform representation of complex sequences of program transformations by schedule functions. The latter facilitates the systematic exploration of the set of legal transformations of a given program. Early approaches to purely iterative schedule optimization in the polyhedron model do not limit their search to schedules that preserve program semantics and, thereby, suffer from the need to explore numbers of illegal schedules. More recent research ensures the legality of program transformations but presumes a sequential rather than a parallel execution of the transformed program. Other approaches do not perform a purely iterative optimization. We propose an approach to iterative schedule optimization for parallelization and tiling in the polyhedron model. Our approach targets loop programs that profit from data locality optimization and coarse-grained loop parallelization. The schedule search space can be explored either randomly or by means of a genetic algorithm. To determine a schedule's profitability, we rely primarily on measuring the transformed code's execution time. While benchmarking is accurate, it increases the time and resource consumption of program optimization tremendously and can even make it impractical. We address this limitation by proposing to learn surrogate models from schedules generated and evaluated in previous runs of the iterative optimization and to replace benchmarking by performance prediction to the extent possible. Our evaluation on the PolyBench 4.1 benchmark set reveals that, in a given setting, iterative schedule optimization yields significantly higher speedups in the execution of the program to be optimized. Surrogate performance models learned from training data that was generated during previous iterative optimizations can reduce the benchmarking effort without strongly impairing the optimization result. A prerequisite for this approach is a sufficient similarity between the training programs and the program to be optimized. KW - Parallelrechner KW - Optimiererung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7936 ER - TY - THES A1 - Hatzesberger, Simon T1 - Strongly Asymptotically Optimal Methods for the Pathwise Global Approximation of Stochastic Differential Equations with Coefficients of Super-linear Growth N2 - Our subject of study is strong approximation of stochastic differential equations (SDEs) with respect to the supremum and the L_p error criteria, and we seek approximations that are strongly asymptotically optimal in specific classes of approximations. For the supremum error, we prove strong asymptotic optimality for specific tamed Euler schemes relating to certain adaptive and to equidistant time discretizations. 
For the L_p error, we prove strong asymptotic optimality for specific tamed Milstein schemes relating to certain adaptive and to equidistant time discretizations. To illustrate our findings, we numerically analyze the SDE associated with the Heston–3/2–model originating from mathematical finance. KW - Stochastic differential equation KW - Strong approximation KW - Strong asymptotic optimality KW - Asymptotic lower error bounds KW - Asymptotic upper error bounds KW - Stochastische Differentialgleichung KW - Approximation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8100 ER - TY - THES A1 - Stahlbauer, Andreas T1 - Abstract Transducers for Software Analysis and Verification N2 - Whenever software faults can endanger human life, property, or the environment, the absence of faults must be ensured with utmost care and the best technologies available. Evidence is needed showing that all requirements are satisfied and that the risk of faults is reduced. One technique to conduct such a verification task—composed of the software to verify, the specification to check, and a model of the environment—is software model checking. To conduct a verification task with a model checker, different models of the task are constructed. We distinguish between two types of task models: syntactic task models and semantic task models, which define the respective syntactic structure (control flow) and semantic structure (state transitions, invariants) of the verification task. When constructing such models, we can observe that similar structures and substructures reappear within and among different verification tasks. For example, the same assertions to check can appear in different functions, or the same predicate can be part of different invariants to describe sets of program states. Similarities that appear during the model construction process can be the result of solving similar reasoning problems, often solved using computationally expensive procedures (as typical for model checking), over and over again. Not reusing results of solving similar problems, not having a means for conducting repeated efforts automatically, or not trying to reduce the number of similar reasoning efforts, is a waste of precious resources. To address these problems, we present a common conceptual and technical foundation for sharing syntactic and semantic task artifacts for reuse, within and among verification runs. Both the syntactic construction of a verification task and the construction of its semantic model—which describes all possible behaviors and states—are covered. We study how commonalities and regularities in the task models can be taken into account to facilitate the process of sharing task artifacts for reuse, and to make the overall verification process more efficient and effective. We introduce abstract transducers as the theoretical foundation of this thesis: a type of finite-state transducers with an inherent notion of abstraction for states, the input alphabet, and its output alphabet. Abstracting these transducers allows us to widen both the set of input words for that they produce output and the sets of output words. Abstract transducers are instantiated as task artifact transducers to map from program structures to task artifacts to share. We show that the notion of abstraction provides a means for increasing the scope for that task artifacts are shared for reuse. We present two instances of task artifact transducers: Yarn transducers and precision transducers. 
We use Yarn transducers for providing code to weave into the control-flow structure of a computer program, and present the Loom analysis as a means for orchestrating the weaving process. Precision transducers provide a means for sharing abstraction precisions for reuse, thus aiding in defining the level of abstraction of a semantic task model. For both types of transducers, we provide empirical evidence on their practical applicability, for example, to verify Linux kernel modules, and show that they can help in increasing the verification performance. KW - Program Analysis KW - Software Model Checking KW - Automata Theory KW - Transduktor KW - Formale Beschreibungstechnik KW - Modellgetriebene Entwicklung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8468 ER - TY - THES A1 - Planche, Benjamin T1 - Bridging the Realism Gap for CAD-Based Visual Recognition N2 - Computer vision aims at developing algorithms to extract high-level information from images and videos. In the industry, for instance, such algorithms are applied to guide manufacturing robots, to visually monitor plants, or to assist human operators in recognizing specific components. Recent progress in computer vision has been dominated by deep artificial neural networks, i.e., machine learning methods simulating the way that information flows in our biological brains, and the way that our neural networks adapt and learn from experience. For these methods to learn how to accurately perform complex visual tasks, large amounts of annotated images are needed. Collecting and labeling such domain-relevant training datasets is, however, a tedious—sometimes impossible—task. Therefore, it has become common practice to leverage pre-available three-dimensional (3D) models instead, to generate synthetic images for the recognition algorithms to be trained on. However, methods optimized over synthetic data usually suffer a significant performance drop when applied to real target images. This is due to the realism gap, i.e., the discrepancies between synthetic and real images (in terms of noise, clutter, etc.). In my work, three main directions were explored to bridge this gap. First, an innovative end-to-end framework is proposed to render realistic depth images from 3D models, as a growing number of solutions (especially in the industry) are utilizing low-cost depth cameras (e.g., Microsoft Kinect and Intel RealSense) for recognition tasks. Based on a thorough study of these devices and the different types of noise impairing them, the proposed framework simulates their inner mechanisms, comprehensively modeling vital factors such as sensor noise, material reflectance, surface geometry, etc. Able to simulate a wide range of depth sensors and to quickly generate large datasets, this framework is used to train algorithms for various recognition tasks, consistently and significantly enhancing their performance compared to other state-of-the-art simulation tools. In some cases, however, relevant 2D or 3D object representations to generate synthetic samples are not available. Considering this different case of data scarcity, a solution is then proposed to incrementally build a representation of visual scenes from partial observations. Provided observations are localized relative to one another based on their content and registered in a global memory with spatial properties. Simultaneously, this memory can be queried to render novel views of the scene.
Furthermore, unobserved regions can be hallucinated in memory, in consistence with previous observations, hallucinations, and global priors. The efficacy of the proposed mnemonic and generative system, trainable end-to-end, is demonstrated on various 2D and 3D use-cases. Finally, an advanced convolutional neural network pipeline is introduced, tackling the realism gap from a novel angle. While most methods addressing this problem focus on bringing synthetic samples—or the knowledge acquired from them—closer to the real target domain, the proposed solution performs the opposite process, mapping unseen target images into controlled synthetic domains. The pre-processed samples can then be handed to downstream recognition methods, themselves purely trained on similar synthetic data, to greatly improve their accuracy. For each approach, a variety of qualitative and quantitative studies are detailed, providing successful comparisons to state-of-the-art methods. By proposing solutions to bridge the realism gap from either side, as well as a pipeline to improve the acquisition and generation of new visual content, this thesis provides a unique perspective on the challenges of data scarcity when building robust recognition systems. N2 - Die Computer Vision strebt an, Algorithmen zum Extrahieren hochwertiger Informationen von Bildern und Videos zu entwickeln. In der Industrie werden solche Algorithmen beispielsweise angewendet, um Fertigungsroboter zu steuern, um Betriebe visuell zu überwachen, oder um Mitarbeiter bei der Erkennung bestimmter Komponenten zu unterstützen. Die kürzlichen Fortschritte im Bereich Computer Vision wurden von tiefen künstlichen neuronalen Netzen dominiert. Diese Methoden des maschinelles Lernens (Machine Learning) simulieren die Art und Weise, in der die Information in unseren biologischen Gehirnen verarbeitet wird und in der unsere neuronale Netze sich anpassen und aus Erfahrung lernen. Damit diese Methoden zur genauen Ausführung komplexer visueller Aufgaben befähigt werden, müssen sie mit einer großen Anzahl von annotierten Bildern trainiert werden. Die Erhebung und Kennzeichnung entsprechender Trainingsdatensätze ist jedoch eine langwierige und manchmal sogar unmögliche Aufgabe. Deswegen ist es zur gängigen Praxis geworden, stattdessen die vorhandenen 3D-Modelle zur Generierung synthetischer Bilder einzusetzen, damit die Erkennungsalgorithmen mit Hilfe dieser Bilder trainiert werden. Allerdings, bei der Anwendung auf die realen Zielbilder, erleiden die Methoden, die durch synthetische Daten angepasst wurden, einen erheblichen Leistungsabfall. Dies geschieht aufgrund der Realismuslücke (Realism Gap), das heißt durch die Diskrepanzen zwischen synthetischen und realen Bildern (hinsichtlich von Rauschen, Störungen usw.). In meiner Arbeit wurden drei Hauptrichtungen untersucht, um diese Lücke zu schließen. Zuerst wird ein innovatives End-to-End-Framework vorgeschlagen, um realistische Tiefenbilder von 3D-Modellen zu rendern, denn immer mehr Lösungen (insbesondere in der Industrie) verwenden kostengünstige Tiefen-Kameras (z. B. Microsoft Kinect und Intel RealSense) für die Erkennungsaufgaben. Aufgrund einer gründlichen Untersuchung dieser Geräte und der verschiedenen Arten von Rauschen, die dem Aufnahmen beeinträchtigen, simuliert das vorgeschlagene Framework deren innere Mechanismen, indem Schlüsselfaktoren wie Sensorrauschen, Reflektionsgrade der Materialien, Oberflächengeometrie usw. umfassend modelliert werden. 
Dieses Framework ist in der Lage, eine breite Palette von Tiefensensoren zu simulieren und schnell große Datensätze zu generieren. Dies wird eingesetzt, um die Algorithmen für verschiedene Erkennungsaufgaben zu trainieren und deren Leistung im Vergleich zu anderen hochmodernen Simulationsmethoden konsistent und erheblich zu verbessern. In manchen Fällen sind jedoch keine relevanten 2D- oder 3D-Objektdarstellungen zur Erzeugung von synthetischen Bildern verfügbar. Ausgehend von dieser Problematik des Datenmangels wurde eine Lösung vorgeschlagen, in der die Rekonstruktion von visuellen Szenen aus Teilbeobachtungen schrittweise durchgeführt wird. Die Bilder werden anhand ihres Inhalts in Bezug zueinander lokalisiert und in einer globalen Gedächtnisstruktur mit räumlichen Eigenschaften registriert. Gleichzeitig kann dieses Gedächtnis abgerufen werden, um neue Ansichten der Szene zu rendern. Darüber hinaus können bisher unbeobachtete Regionen in Übereinstimmung mit früheren Beobachtungen, Halluzinationen und globalem Vorwissen im Gedächtnis halluziniert werden. Die Wirksamkeit des vorgeschlagenen, durchgehend trainierbaren mnemonischen und generativen Systems wird anhand von verschiedenen 2D- und 3D-Anwendungsfällen demonstriert. Schließlich wird eine auf Convolutional Neural Networks (CNNs) basierte weiterentwickelte Pipeline vorgestellt, die die Realismuslücke aus einem neuen Blickwinkel angeht. Während die meisten Methoden, die sich mit diesem Problem befassen, sich darauf konzentrieren, synthetische Datenproben (bzw. daraus erworbenes Wissen) näher an die echte/reale Zieldomäne zu bringen, führt die vorgeschlagene Lösung den umgekehrten Prozess durch, indem ungesehene Zielbilder in die kontrollierten synthetischen Domänen abgebildet werden. Die vorbehandelten Datenproben können dann an die nachgeschalteten Erkennungsalgorithmen übergeben werden, die selbst anhand ähnlicher synthetischer Daten trainiert wurden, um deren Genauigkeit deutlich zu verbessern. Für jeden Ansatz werden verschiedene qualitative und quantitative Studien durchgeführt, um sie mit den neuesten Methoden zu vergleichen. Insgesamt werden in dieser Arbeit Methoden zur Überbrückung der Realismuslücke auf beiden Seiten sowie eine Lösung zur Verbesserung der Erfassung und Generierung neuer visueller Inhalte beschrieben. Daher bietet diese Dissertation eine neuartige Perspektive auf die Herausforderungen der Datenknappheit bei der Entwicklung robuster Erkennungssysteme. KW - computer vision KW - machine learning KW - domain adaptation KW - realism gap KW - visual understanding Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8361 ER - TY - THES A1 - Keren, Gil T1 - Neural Network Supervision: Notes on Loss Functions, Labels and Confidence Estimation N2 - We consider a number of enhancements to the standard neural network training paradigm. First, we show that carefully designed parameter update rules may replace the need for a loss function and its gradient. We introduce a parameter update rule that generalises the standard cross-entropy gradient, and allows directly controlling the relative effect of easy and hard examples on the training process. We show that the proposed update rule cannot be derived by using a loss function and yields better classification accuracy compared to training with the standard cross-entropy loss. In addition, we study the effect of the loss function choice on the learnt representations.
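As a purely hypothetical illustration of such an update rule (not the rule derived in the thesis): one can take the usual cross-entropy gradient with respect to the logits, p - y, and rescale it per example by how far the correct-class probability is from 1, with an invented exponent gamma controlling the relative weight of easy and hard examples; for gamma = 0 the standard gradient is recovered.

```python
import numpy as np

def weighted_ce_update(logits, labels, gamma=2.0):
    """Per-example rescaled cross-entropy gradient w.r.t. the logits (illustrative only)."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                   # softmax probabilities
    y = np.eye(logits.shape[1])[labels]                 # one-hot targets
    hardness = (1.0 - p[np.arange(len(labels)), labels]) ** gamma
    return hardness[:, None] * (p - y)                  # gamma = 0 gives the plain CE gradient

grad = weighted_ce_update(np.random.randn(4, 3), np.array([0, 2, 1, 0]))
```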
We introduce the Single Logit Classification (SLC) task: classifying whether a given class is the correct class for a given example, in a computationally efficient manner, based on the appropriate class logit alone. A natural principle is proposed, the Principle of Logit Separation (PoLS), as a guideline for choosing and designing loss functions suitable for the SLC task. We mathematically analyse the alignment of eleven existing and novel loss functions with this principle. Experiment results show that using loss functions that are aligned with this principle results in a representation in the logits layer in which each logit is more informative of its class correctness, leading to a considerably better SLC accuracy. Further, we attempt to alleviate the dependency of standard neural network models on large amounts of quality labels. The task of weakly supervised one-shot detection is considered, in which at training time the model is trained without any localisation labels, and at test time it needs to identify and localise instances of unseen classes. We propose the attention similarity networks (ASN) for this task. ASN use a Siamese neural network to compute a similarity score between an exemplar and different locations in a target example. Then, an attention mechanism performs localisation by learning to attend to the correct locations. The ASN model outperforms the relevant baselines for weakly supervised one-shot detection tasks in the audio and computer vision domains. Finally, we consider the problem of quantifying prediction confidence in the regression setting. We propose two novel algorithms for emitting calibrated prediction intervals for neural network regressors, at any given confidence level. The two algorithms require binning of the output space and training the neural network regressor as a classifier. Then, the calibration algorithms choose the intervals in the output space, making sure they contain the amount of posterior probability mass that results in the desired confidence level. KW - Neuronales Netz KW - Maschinelles Lernen Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8223 ER - TY - THES A1 - Koop, Martin T1 - Preventing the Leakage of Privacy Sensitive User Data on the Web N2 - Das Aufzeichnen der Internetaktivität ist mit der Verknüpfung persönlicher Daten zu einer Schlüsselressource für viele kostenpflichtige und kostenfreie Dienste im Web geworden. Diese Dienste sind zum einen Webanwendungen, wie beispielsweise die von Google bereitgestellten Karten/Navigation oder Websuche, die täglich kostenlos verwendet werden. Zum anderen sind es alle Webseiten, die meist kostenlos Nachrichten oder allgemeine Informationen zu verschiedenen Themen bereitstellen. Durch das Aufrufen und die Nutzung dieser Webdienste werden alle Informationen, die im Webdienst verarbeitet werden, an den Dienstanbieter weitergeben. Dies umfasst nicht nur die im Benutzerkonto des Webdienstes gespeicherte Profildaten wie Name oder Adresse, sondern auch die Aktivität mit dem Webdienst wie das anklicken von Links oder die Verweildauer. Darüber hinaus gibt es jedoch auch unzählige Drittparteien, welche zumeist im Hintergrund in die Webdienste eingebunden sind und das Benutzerverhalten der kompletten Webaktivität - Webseiten übergreifend - mitspeichern sowie auswerten. Der Einsatz verschiedener, in der Regel für den Benutzer verborgener Techniken, dient dazu das Online-Verhalten der Benutzer genau zu verfolgen und viele sensible Daten zu sammeln. 
Dieses Verhalten wird als Web-Tracking bezeichnet und wird hauptsächlich von Werbeunternehmen genutzt. Die gesammelten Daten sind oft personenbezogen und eine wertvolle Ressourcen der Unternehmen, um Beispielsweise passend zum Benutzerprofil personalisierte Werbung schalten zu können. Mit der Nutzung dieser personenbezogenen Daten entstehen aber auch weitreichendere Auswirkungen, welche sich unter anderem in Preisanpassungen für Benutzer mit speziellen Profilattributen, wie der Nutzung von teuren Endgeräten, widerspiegeln. Ziel dieser Arbeit ist es die Privatsphäre der Nutzer im Internet zu steigern und die Nutzerverfolgung von Web-Tracking signifikant zu reduzieren. Dabei stellen sich vier Herausforderungen, die jeweils einen Forschungsschwerpunkt dieser Arbeit bilden: (1) Systematische Analyse und Einordnung eingesetzter Tracking-Techniken, (2) Untersuchung vorhandener Schutzmechanismen und deren Schwachstellen,(3) Konzeption einer Referenzarchitektur zum Schutz vor Web-Tracking und (4) Entwurf einer automatisierten Testumgebungen unter Realbedingungen, um die Reduzierung von Web-Tracking in den entwickelten Schutzmaßnahmen zu untersuchen. Jeder dieser Forschungsschwerpunkte stellt neue Beiträge bereit, um einheitlich das übergeordnete Ziel zu erreichen: der Entwicklung von Schutzmaßnahmen gegen die Preisgabe sensibler Benutzerdaten im Internet. Der erste wissenschaftliche Beitrag dieser Dissertation ist eine umfassende Evaluation eingesetzter Web-Tracking Techniken und Methoden, sowie deren Gefahren, Risiken und Implikationen für die Privatsphäre der Internetnutzer. Die Evaluation beinhaltet zusätzlich die Untersuchung vorhandener Tracking-Schutzmechanismen und deren Schwachstellen. Die gewonnenen Erkenntnisse sind maßgeblich für die in dieser Arbeit neu entwickelten Ansätze und verbessern den bisherigen nicht hinreichend gewährleisteten Schutz vor Web-Tracking. Der zweite wissenschaftliche Beitrag ist die Entwicklung einer robusten Klassifizierung von Web-Tracking, der Entwurf einer effizienten Architektur zur Langzeituntersuchung von Web-Tracking sowie einer interaktiven Visualisierung des Auftreten von Web-Tracking im Internet. Dabei basiert der neue Klassifizierungsansatz, um Tracking zu identifizieren, auf der Entropie Messung des Informationsgehalts von Cookies. Die Resultate der Web-Tracking Langzeitstudien sind unter anderem 1.209 identifizierte Tracking-Domains auf den meistbesuchten Webseiten in Deutschland. Hierbei wurden innerhalb der Top 25 Webseiten im Durchschnitt 45 Tracking-Elemente pro Webseite gefunden. Der Tracker mit dem höchsten Potenzial zum Erstellen eines Benutzerprofils war doubleclick.com, da er 90% der Webseiten überwacht. Die Auswertung des untersuchten Tracking-Netzwerks ergab weiterhin einen detaillierten Einblick in die Tracking-Technik mithilfe von Weiterleitungslinks. Dabei haben wir 1,2 Millionen HTTP-Traces von monatelangen Crawls der 50.000 international meistbesuchten Webseiten analysiert. Die Ergebnisse zeigen, dass 11,6% dieser Webseiten HTTP-Redirects, verborgen in Webseiten-Links, zum Tracken verwenden. Dies wird eingesetzt, um den Webseitenverlauf des Benutzers nach dem Klick durch eine Kette von (Tracking-)Servern umzuleiten, welche in der Regel nicht sichtbar sind, bevor das beabsichtigte Link-Ziel geladen wird. In diesem Szenario erfasst der Tracker wertvolle Verbindungs-Metadaten zu Inhalt, Thema oder Benutzerinteressen der Website. Die Visualisierung des Tracking Ökosystem stellen wir in einem interaktiven Open-Source Web-Tool bereit. 
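The entropy-based cookie classification mentioned above can be pictured as follows: unique identifiers tend to be long, high-entropy strings, so a simple heuristic scores a cookie value by its empirical Shannon entropy. This is only a sketch of the idea, not the classifier developed in the thesis, and the length and entropy thresholds are made up.

```python
from collections import Counter
from math import log2

def shannon_entropy(value: str) -> float:
    """Empirical per-character Shannon entropy of a cookie value, in bits."""
    counts = Counter(value)
    total = len(value)
    return -sum(c / total * log2(c / total) for c in counts.values())

def looks_like_tracking_id(value: str, min_len=16, min_entropy=3.5) -> bool:
    return len(value) >= min_len and shannon_entropy(value) >= min_entropy

print(looks_like_tracking_id("de"))                    # language preference -> False
print(looks_like_tracking_id("a9F3kL0qZx7Tb2Wc8yR1"))  # opaque identifier -> likely True
```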
Der dritte wissenschaftliche Beitrag dieser Dissertation ist die Konzeption von zwei neuartigen Schutzmechanismen gegen Web-Tracking und der Aufbau einer automatisierten Simulationsumgebung unter Realbedingungen, um die Effektivität der Umsetzungen zu verifizieren. Der Fokus liegt auf den beiden meist verwendeten Tracking-Verfahren: Cookies (hierbei wird eine eindeutigen ID auf dem Gerät des Benutzers gespeichert), sowie Browser-Fingerprinting. Letzteres beschreibt eine Methode zum Sammeln einer Vielzahl an Geräteeigenschaften, um den Benutzer eindeutig zu (re- )identifizieren, ohne eine eindeutige ID auf dem Gerät zu speichern. Um die Effektivität der in dieser Arbeit entwickelten Schutzmechanismen vor Web-Tracking zu untersuchen, implementierten und evaluierten wir die Schutzkonzepte direkt im Chromium Browser. Das Ergebnis zeigt eine erfolgreiche Reduzierung von Web-Tracking um 44%. Zusätzlich verbessert das in dieser Arbeit entwickelte Konzept “Site Isolation” den Datenschutz des privaten Browsing-Modus, ermöglicht das Setzen eines manuellen Speicher-Zeitlimits von Cookies und schützt den Browser gegen verschiedene Bedrohungen wie CSRF (Cross-Site Request Forgery) oder CORS (Cross-Origin Ressource Sharing). Site Isolation speichert dabei den Status der lokalen Website in separaten Containern und kann dadurch diverse Tracking-Methoden wie Cookies, lokalStorage oder redirect tracking verhindern. Bei der Auswertung von 1,6 Millionen Webseiten haben wir gezeigt, dass der Tracker doubleclick.com das höchste Potenzial besitzt, den Nutzer zu verfolgen und auf 25% der 40.000 international meistbesuchten Webseiten vertreten ist. Schließlich demonstrieren wir in unserem erweiterten Chromium-Browser einen robusten Browser-Fingerprinting-Schutz. Der Test unseres Prototyps mittels 70.000 Browsersitzungen zeigt, dass unser Browser den Nutzer vor sogenanntem Browser-Fingerprinting Tracking schützt. Im Vergleich zu fünf anderen Browser-Fingerprint-Tools erzielte unser Prototyp die besten Ergebnisse und ist der erste Schutzmechanismus gegen Flash sowie Canvas Fingerprinting. KW - Web Tracking, Cookies, Browser Fingerprinting, Redirects, Site Isolation KW - Datenschutz KW - Computersicherheit KW - Objektverfolgung Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8717 ER - TY - THES A1 - Kasinathan, Prabhakaran T1 - Workflow-aware access control for the Internet of Things N2 - IoT is defined as a paradigm where "things" have sensing, actuating, communicating, and self-configuring abilities, and are connected to each other and to the Internet. Recent advancements in the manufacturing industry have helped to produce embedded devices with various sensors and actuators in mass numbers at a reduced cost. As part of the IoT revolution, everyday devices such as television, refrigerator, cars, even industrial machines are now connected IoT devices. Recent studies have predicted that by 2025 there will be over 75 billion of such IoT devices connected to the Internet. The providers of IoT based services want to integrate their services to satisfy customer requirements. For example, in the mobility scenario, different mobility solution providers want to offer a multi-modal ticket to their customers jointly. In such a distributed and loosely coupled environment, each owner and stakeholder wants to secure his/her own integrity, confidentiality, and functionality goals. 
This means that distributed rules and conditions defined by the individual owners must be enforced on the participating entities (e.g., customers or partners using their services). The owners and stakeholders may not necessarily trust each other's actions. Therefore, a mechanism is required that guarantees the rules and conditions specified by the different owners. Attacks on IoT devices and similar computing systems are increasing and getting more advanced. IoT devices are often constrained, i.e., they have limited processing power, memory, and energy. Security mechanisms designed for traditional computing systems, e.g., computers, servers, or mobile computing devices such as smartphones, may not fit in those constrained IoT devices. Weak security mechanisms and unenforced security measures were one of the main reasons for recent successful attacks on IoT devices and services. As IoT is now used in many sensitive places, including critical infrastructures, securing them becomes more critical than ever. This thesis focuses on developing mechanisms that secure IoT devices and services and enforcing the rules and conditions specified by the owners on entities that want to access owners' resources. In classical computer systems, security automata are used for specifying security policies and monitoring mechanisms are used for enforcing such policies. For instance, a reference monitor observes and stops the execution when the security policies are about to be violated, thus, the security policies are enforced. To restrict the adversary from using protected IoT devices or services for malicious purposes, it is required to ensure that a workflow must be followed to access the protected resource. In distributed IoT systems where the policies are governed by different owners, each owner would like to specify their rules and conditions in their workflows. The workflows contain tasks that must be performed in a particular order. The goal of this thesis is to develop mechanisms to specify and enforce these workflows in the distributed IoT environment. This thesis introduces a distributed WFAC framework that restricts the entities to do only what they are allowed to do in a collaborative environment. To gain access to a service protected by the WFAC framework, every workflow participant must prove that he/she is in a particular state of an authorized workflow. Authorized means two things: (a) the owner has authorized the workflow to be executed; (b) the workflow participant is authorized to execute it. This restricts the adversary's access to the devices and its services. The security policies defined by different owners are modeled as workflows and specified using Petri Nets. The policies are then enforced with the help of the WFAC framework which supports error-handling, accountability, integration of practitioner-friendly tools, and interoperability with existing security mechanisms such as OAuth. Thus, the WFAC guarantees the integrity of workflows in a distributed environment. KW - Workflow-Aware Access Control for the Internet of Things KW - Petri Nets, Blockchain, Security Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8915 ER - TY - THES A1 - Parra Rodriguez, Juan David T1 - Computational Resource Abuse in Web Applications N2 - Internet browsers include Application Programming Interfaces (APIs) to support Web applications that require complex functionality, e.g., to let end users watch videos, make phone calls, and play video games. 
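Returning to the workflow-aware access control idea in the Kasinathan abstract above: the sketch below encodes a two-task workflow in a Petri-net-like fashion (places, a marking, and transitions with pre- and post-conditions) and refuses any task whose preconditions are not marked. The task and place names are invented for illustration and do not reflect the thesis's actual Petri net models.

```python
# Hypothetical two-task workflow: "authenticate" must fire before "read_sensor".
transitions = {
    "authenticate": ({"start": 1}, {"authenticated": 1}),
    "read_sensor": ({"authenticated": 1}, {"done": 1}),
}

def fire(marking, task):
    """Fire a transition if its input places are marked; otherwise deny the task."""
    consume, produce = transitions[task]
    if any(marking.get(place, 0) < n for place, n in consume.items()):
        raise PermissionError(f"task '{task}' is not allowed in the current workflow state")
    result = dict(marking)
    for place, n in consume.items():
        result[place] -= n
    for place, n in produce.items():
        result[place] = result.get(place, 0) + n
    return result

state = {"start": 1}
state = fire(state, "authenticate")
state = fire(state, "read_sensor")   # succeeds only after "authenticate" has fired
```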
Meanwhile, many Web applications employ the browser APIs to rely on the user's hardware to execute intensive computation, access the Graphics Processing Unit (GPU), use persistent storage, and establish network connections. However, providing access to the system's computational resources, i.e., processing, storage, and networking, through the browser creates an opportunity for attackers to abuse resources. Principally, the problem occurs when an attacker compromises a Web site and includes malicious code to abuse its visitor's computational resources. For example, an attacker can abuse the user's system networking capabilities to perform a Denial of Service (DoS) attack against third parties. What is more, computational resource abuse has not received widespread attention from the Web security community because most of the current specifications are focused on content and session properties such as isolation, confidentiality, and integrity. Our primary goal is to study computational resource abuse and to advance the state of the art by providing a general attacker model, multiple case studies, a thorough analysis of available security mechanisms, and a new detection mechanism. To this end, we implemented and evaluated three scenarios where attackers use multiple browser APIs to abuse networking, local storage, and computation. Further, depending on the scenario, an attacker can use browsers to perform Denial of Service against third-party Web sites, create a network of browsers to store and distribute arbitrary data, or use browsers to establish anonymous connections similarly to The Onion Router (Tor). Our analysis also includes a real-life resource abuse case found in the wild, i.e., CryptoJacking, where thousands of Web sites forced their visitors to perform crypto-currency mining without their consent. In the general case, attacks presented in this thesis share the attacker model and two key characteristics: 1) the browser's end user remains oblivious to the attack, and 2) an attacker has to invest little resources in comparison to the resources he obtains. In addition to the attack's analysis, we present how existing, and upcoming, security enforcement mechanisms from Web security can hinder an attacker and their drawbacks. Moreover, we propose a novel detection approach based on browser API usage patterns. Finally, we evaluate the accuracy of our detection model, after training it with the real-life crypto-mining scenario, through a large scale analysis of the most popular Web sites. KW - Web Security KW - Computational Resource Abuse KW - Crypto Currency Mining KW - Parasitic Computing KW - Computersicherheit KW - Browser Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7706 ER - TY - THES A1 - Horáček, Jan T1 - Algebraic and Logic Solving Methods for Cryptanalysis N2 - Algebraic solving of polynomial systems and satisfiability of propositional logic formulas are not two completely separate research areas, as it may appear at first sight. In fact, many problems coming from cryptanalysis, such as algebraic fault attacks, can be rephrased as solving a set of Boolean polynomials or as deciding the satisfiability of a propositional logic formula. Thus one can analyze the security of cryptosystems by applying standard solving methods from computer algebra and SAT solving. 
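As a small concrete instance of the rephrasing mentioned above, the XOR constraint x ≠ y can be written either as the Boolean polynomial x + y + 1 over GF(2) (its zeros are the solutions) or as the CNF formula (x ∨ y) ∧ (¬x ∨ ¬y). The brute-force check below merely confirms that both encodings admit the same assignments; it is an illustration, not one of the thesis's conversion algorithms.

```python
from itertools import product

def anf_satisfied(x, y):
    # Boolean polynomial x + y + 1 over GF(2); a zero corresponds to an assignment with x != y.
    return (x + y + 1) % 2 == 0

def cnf_satisfied(x, y):
    # Equivalent CNF encoding: (x OR y) AND (NOT x OR NOT y).
    return (x or y) and (not x or not y)

for x, y in product((0, 1), repeat=2):
    assert anf_satisfied(x, y) == bool(cnf_satisfied(x, y))
```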
This doctoral thesis is dedicated to studying solvers that are based on logic and algebra separately as well as integrating them into one such that the combined solvers become more powerful tools for cryptanalysis. This dissertation is divided into three parts. In the first part, we recall some theory and basic techniques for algebraic and logic solving. We focus mainly on DPLL-based SAT solving and techniques that are related to border bases and Gröbner bases. In particular, we describe in detail the Border Basis Algorithm and discuss its specialized version for Boolean polynomials called the Boolean Border Basis Algorithm. In the second part of the thesis, we deal with connecting solvers based on algebra and logic. The ultimate goal is to combine the strength of different solvers into one. Namely, we fuse the XOR reasoning from algebraic solvers with the light, efficient design of SAT solvers. As a first step in this direction, we design various conversions from sets of clauses to sets of Boolean polynomials, and vice versa, such that solutions and models are preserved via the conversions. In particular, based on a block-building mechanism, we design a new blockwise algorithm for the CNF to ANF conversion which is geared towards producing fewer and lower degree polynomials. The above conversions allow us to integrate both solvers via a communication interface. To reach an even tighter integration, we consider proof systems that combine resolution and polynomial calculus, i.e., the two most widely used proof systems in logic and algebraic solving. Based on such a proof system, which we call SRES, we introduce new types of solving algorithms that demonstrate the synergy between Gröbner-like and DPLL-like solving. At the end of the second part of the dissertation, we provide some experiments based on a new benchmark which illustrate that our new method based on DPLL has the potential to outperform CDCL SAT solvers. In the third part of the thesis, we focus on practical attacks on various cryptographic primitives. For instance, we apply SAT solvers in the case of algebraic fault attacks on the symmetric ciphers LED and derivatives of the block cipher AES. The main goal there is to derive so-called fault equations automatically from the hardware description of the cryptosystem and thus automate the attack. To give some extra power to a SAT solver that inverts the hash functions SHA-1 and SHA-2, we describe how to tweak the SAT solver using a programmatic interface such that the propagation of the solver and thus the attack itself is improved. KW - Boolean polynomial KW - Border basis KW - SAT solving KW - Combined proof system KW - Algebraic normal form KW - Conjunctive normal form KW - Algebraic fault attack KW - Kryptoanalyse KW - Polynom KW - Beweissystem Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7731 ER - TY - THES A1 - Fink, Thomas T1 - Curvature Detection by Integral Transforms N2 - In various fields of image analysis, determining the precise geometry of occurring edges, e.g. the contour of an object, is a crucial task. Especially the curvature of an edge is of great practical relevance. In this thesis, we develop different methods to detect a variety of edge features, among them the curvature.
We first examine the properties of the parabolic Radon transform and show that it can be used to detect the edge curvature, as the smoothness of the parabolic Radon transform changes when the parabola is tangential to an edge and also when additionally the curvature of the parabola coincides with the edge curvature. By subsequently introducing a parabolic Fourier transform and establishing a precise relation between the smoothness of a certain class of functions and the decay of the Fourier transform, we show that the smoothness result for the parabolic Radon transform can be translated into a change of the decay rate of the parabolic Fourier transform. Furthermore, we introduce an extension of the continuous shearlet transform which additionally utilizes shears of higher order. This extension, called the Taylorlet transform, allows for the detection of the position and orientation, as well as the curvature and other higher-order geometric information of edges. We introduce novel vanishing moment conditions which enable a more robust detection of the geometric edge features and examine two different constructions for Taylorlets. Lastly, we translate the results of the Taylorlet transform in R^2 into R^3 and thereby allow for the analysis of the geometry of object surfaces. KW - Curvature KW - Wavelet KW - Shearlet KW - Parabolic Radon transform KW - Edge classification KW - Krümmung KW - Wavelet-Transformation KW - Shearlet KW - Radon-Transformation KW - Konturfindung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7684 ER - TY - THES A1 - Lucas, Yvan T1 - Credit card fraud detection using machine learning with integration of contextual knowledge N2 - We have proposed a strategy for the creation of attributes based on hidden Markov models (HMM) characterizing the transaction from different points of view. This strategy makes it possible to integrate a broad spectrum of sequential information into the attributes of transactions. In fact, we model the authentic and fraudulent behavior of merchants and card holders according to two univariate characteristics: the date and the amount of transactions. In addition, attributes based on HMMs are created in a supervised manner, thereby reducing the need for expert knowledge for the creation of the fraud detection system. Ultimately, our HMM-based multi-perspective approach allows automated data pre-processing to model time correlations to complement and eventually replace transaction aggregation strategies to improve detection efficiency. Experiments carried out on a large set of credit card transaction data from the real world (46 million transactions carried out by Belgian card holders between March and May 2015) have shown that the strategy proposed for data preprocessing based on HMM can detect more fraudulent transactions when combined with the strategy of preprocessing reference data based on expert knowledge for the detection of credit card fraud. KW - Kreditkartenmissbrauch KW - Maschinelles Lernen Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7713 ER - TY - THES A1 - Reislhuber, Josef T1 - Optical Graph Recognition N2 - Graphs are an important model for the representation of structural information between objects. One identifies objects with nodes and a binary relation between objects with edges. Graphs have many uses, e. g., in social sciences, life sciences and engineering. There are two primary representations: abstract and visual.
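Referring back to the HMM-based attributes in the Lucas abstract above: one way to picture such an attribute is the log-likelihood of a card holder's recent (discretised) amount sequence under an HMM, computed with the forward algorithm. The two-state model below and all probabilities are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical 2-state HMM over discretised transaction amounts (0 = small, 1 = medium, 2 = large).
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
emit = np.array([[0.5, 0.3, 0.2],    # state 0: mostly small amounts
                 [0.1, 0.3, 0.6]])   # state 1: mostly large amounts

def log_likelihood(obs):
    """Forward algorithm: log P(observed amount sequence | HMM), usable as a feature."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return float(np.log(alpha.sum()))

print(log_likelihood([0, 0, 1, 2, 2]))
```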
The abstract representation is well suited for processing graphs by computers and is given by an adjacency list, an adjacency matrix or any abstract data structure. A visual representation is used by human users who prefer a picture. Common terms are diagram, scheme, plan, or network. The objective of Graph Drawing is to transform a graph into a visual representation called the drawing of a graph. The goal is a “nice” drawing. In this thesis we introduce Optical Graph Recognition. Optical Graph Recognition (OGR) reverses Graph Drawing and transforms a digital image of a graph into an abstract representation. Our approach consists of four phases: Preprocessing where we determine which pixels of an image are part of the graph, Segmentation where we recognize the nodes, Topology Recognition where we detect the edges and Postprocessing where we enrich the recognized graph with additional information. We apply established digital image processing methods and make use of the special property that the image contains nodes that are connected by edges. We have focused on developing algorithms that need as few parameters as possible or that automatically calibrate the parameters. Most false recognition results are caused by crossing edges as this makes tracing the edges difficult and can lead to other recognition errors. We have evaluated hand-drawn and computer-drawn graphs. Our algorithms have a very high recognition rate for computer-drawn graphs, e. g., from a set of 100000 computer-drawn graphs over 90% were correctly recognized. Most false recognition results were observed for hand-drawn graphs as they can include drawing errors and inaccuracies. For universal usability we have implemented a prototype called OGRup for mobile devices like smartphones or tablet computers. With our software it is possible to directly take a picture of a graph via a built-in camera, recognize the graph, and then use the result for further processing. Furthermore, in order to gain more insight into the way a person draws a graph by hand, we have conducted a field study. KW - Bildverarbeitung KW - Graphenzeichnen Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5159 ER - TY - THES A1 - Lorenz, Florian T1 - Analyse und Erzeugung von glatten Flächenübergängen für das CNC-Fräsen N2 - In dieser Arbeit werden numerisch stabile Methoden zur Prüfung von Stetigkeiten an Flächenübergängen vorgestellt und Algorithmen zur Erzeugung von G^2-stetigen Flächenübergängen hergeleitet. KW - Differentialgeometrie KW - B-Spline KW - Geometrische Modellierung KW - CNC-Maschine Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5166 ER - TY - THES A1 - Hanauer, Kathrin T1 - Linear Orderings of Sparse Graphs N2 - The Linear Ordering problem consists in finding a total ordering of the vertices of a directed graph such that the number of backward arcs, i.e., arcs whose heads precede their tails in the ordering, is minimized. A minimum set of backward arcs corresponds to an optimal solution to the equivalent Feedback Arc Set problem and forms a minimum Cycle Cover. Linear Ordering and Feedback Arc Set are classic NP-hard optimization problems and have a wide range of applications. Whereas both problems have been studied intensively on dense graphs and tournaments, not much is known about their structure and properties on sparser graphs. There are also only a few approximation algorithms that give performance guarantees, especially for graphs with bounded vertex degree.
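To make the objective in the Hanauer abstract above concrete, the following sketch counts the backward arcs induced by a given vertex ordering; minimising this count over all orderings is exactly the Linear Ordering problem. The example graph is made up.

```python
def backward_arcs(order, arcs):
    """Count arcs whose head precedes its tail in the given vertex order."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(1 for tail, head in arcs if pos[head] < pos[tail])

arcs = [("a", "b"), ("b", "c"), ("c", "a")]    # a 3-cycle: every ordering has a backward arc
print(backward_arcs(["a", "b", "c"], arcs))    # -> 1
```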
This thesis fills this gap in multiple respects: We establish necessary conditions for a linear ordering (and thereby also for a feedback arc set) to be optimal, which provide new and fine-grained insights into the combinatorial structure of the problem. From these, we derive a framework for polynomial-time algorithms that construct linear orderings which adhere to one or more of these conditions. The analysis of the linear orderings produced by these algorithms is especially tailored to graphs with bounded vertex degrees of three and four and improves on previously known upper bounds. Furthermore, the set of necessary conditions is used to implement exact and fast algorithms for the Linear Ordering problem on sparse graphs. In an experimental evaluation, we finally show that the property-enforcing algorithms produce linear orderings that are very close to the optimum and that the exact representative delivers solutions in a timely manner also in practice. As an additional benefit, our results can be applied to the Acyclic Subgraph problem, which is the complementary problem to Feedback Arc Set, and provide insights into the dual problem of Feedback Arc Set, the Arc-Disjoint Cycles problem. KW - directed graph KW - graph algorithms KW - feedback arc set KW - linear ordering KW - cycle cover KW - Graphentheorie KW - Lineares Ordnungsproblem KW - Schwach besetzte Matrix Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5524 ER - TY - THES A1 - Niedermeier, Michael T1 - Towards High Performability in Advanced Metering Infrastructures N2 - The current movement towards a smart grid serves as a solution to present power grid challenges by introducing numerous monitoring and communication technologies. A dependable, yet timely exchange of data is on the one hand an existential prerequisite to enable Advanced Metering Infrastructure (AMI) services, yet on the other a challenging endeavor, because the increasing complexity of the grid fostered by the combination of Information and Communications Technology (ICT) and utility networks inherently leads to dependability challenges. To be able to counter this dependability degradation, current approaches based on high-reliability hardware or physical redundancy are no longer feasible, as they lead to increased hardware costs or maintenance, if not both. The flexibility of these approaches regarding vendor and regulatory interoperability is also limited. However, a suitable solution to the AMI dependability challenges is also required to maintain certain regulatory-set performance and Quality of Service (QoS) levels. While a part of the challenge is the introduction of ICT into the power grid, it also serves as part of the solution. In this thesis a Network Functions Virtualization (NFV) based approach is proposed, which employs virtualized ICT components serving as a replacement for physical devices. By using virtualization techniques, it is possible to enhance the performability in contrast to hardware based solutions through the usage of virtual replacements of processes that would otherwise require dedicated hardware. This approach offers higher flexibility compared to hardware redundancy, as a broad variety of virtual components can be spawned, adapted and replaced in a short time. Also, as no additional hardware is necessary, the incurred costs decrease significantly. 
In addition to that, most of the virtualized components are deployed on Commercial-Off-The-Shelf (COTS) hardware solutions, further increasing the monetary benefit. The approach is developed by first reviewing currently suggested solutions for AMIs and related services. Using this information, virtualization technologies are investigated for their performance influences, before a virtualized service infrastructure is devised, which replaces selected components by virtualized counterparts. Next, a novel model, which allows the separation of services and hosting substrates is developed, allowing the introduction of virtualization technologies to abstract from the underlying architecture. Third, the performability as well as monetary savings are investigated by evaluating the developed approach in several scenarios using analytical and simulative model analysis as well as proof-of-concept approaches. Last, the practical applicability and possible regulatory challenges of the approach are identified and discussed. Results confirm that—under certain assumptions—the developed virtualized AMI is superior to the currently suggested architecture. The availability of services can be severely increased and network delays can be minimized through centralized hosting. The availability can be increased from 96.82% to 98.66% in the given scenarios, while decreasing the costs by over 60% in comparison to the currently suggested AMI architecture. Lastly, the performability analysis of a virtualized service prototype employing performance analysis and a Musa-Okumoto approach reveals that the AMI requirements are fulfilled. KW - Advanced Metering Infrastructure KW - Virtualization KW - Performability KW - Energieversorgung KW - Virtualisierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8597 ER - TY - THES A1 - Schlötterer, Jörg T1 - Supporting the Discovery of Long-Tail Resources on the Web N2 - A plethora of resources made available via retrieval systems in digital libraries remains untapped in the so called long tail of the Web. These long-tail websites get considerably less visits than major Web hubs. Zero-effort queries ease the discovery of long-tail resources by proactively retrieving and presenting information based on a user’s context. However, zero-effort queries over existing digital library structures are challenging, since the underlying retrieval system is only accessible via an API. The information need must be expressed by a query, instead of optimizing the ranking between context and resources in the retrieval system directly. We address three research questions that arise from replacing the user information seeking process by zero-effort queries. Our first question addresses the transformation of a user query to an automatic query, derived from the context. We present means to 1) identify the relevant context on different levels of granularity, 2) derive an information need from the context via keyword extraction and personalization and 3) express this information need in a query scheme that avoids over- or under-specified queries. We address the cold start problem with an approach to bootstrap user profiles from social media, even for passive users. With the second question, we address the presentation of resources in zero-effort query scenarios, presenting guidelines for presentation interfaces in the browser and a visualization of the triadic relationship between context, query and results. 
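A crude sketch of the query-derivation step described in the first research question above: extract the most frequent non-stopword terms from the user's current context and join them into a keyword query. The stopword list and cut-offs below are invented, and the personalisation and over/under-specification handling of the thesis are omitted.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on", "with"}

def zero_effort_query(context_text, k=5):
    """Derive a keyword query from the user's current context (crude sketch)."""
    words = re.findall(r"[a-z]+", context_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return " ".join(term for term, _ in counts.most_common(k))

print(zero_effort_query("The long tail of the web hides many digital library resources, "
                        "and long tail resources get few visits."))
```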
QueryCrumbs, a compact query history visualization, supports recalling information found in the past and exploratory search by visualizing qualitative and quantitative query similarity. Our last question addresses the gap between (simple) keyword queries and the representation of resources by rich and complex meta-data. We investigate and extend feature representation learning techniques centered around the skip-gram model with negative sampling. Finally, we present an approach to learn representations from network and text jointly that can cope with the partial absence of one modality. Experimental results show close-to-human performance of our zero-effort query and user profile generation approach, and the visualizations prove helpful in terms of transparency, efficiency and support for exploratory search. These results indicate that the proposed zero-effort query approach indeed eases the discovery of long-tail resources and the accompanying visualizations further facilitate this process. The joint representation model provides a first step to bridge the gap between query and resource representation and we plan to follow and investigate this route further in the future. KW - Data Science KW - Big Data KW - Information Retrieval Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8539 ER - TY - THES A1 - Garchery, Mathieu T1 - User-centered intrusion detection using heterogeneous data N2 - With the frequency and impact of data breaches rising, it has become essential for organizations to automate intrusion detection via machine learning solutions. This generally comes with numerous challenges, among them high class imbalance, changing target concepts and difficulties in conducting sound evaluation. In this thesis, we adopt a user-centered anomaly detection perspective to address selected challenges of intrusion detection, through a real-world use case in the identity and access management (IAM) domain. In addition to the previous challenges, salient properties of this particular problem are high relevance of categorical data, limited feature availability and total absence of ground truth. First, we ask how to apply anomaly detection to IAM audit logs containing a restricted set of mixed (i.e. numeric and categorical) attributes. Then, we inquire how anomalous user behavior can be separated from normality, and how this separation can be evaluated without ground truth. Finally, we examine how the lack of audit data can be alleviated in two complementary settings. On the one hand, we ask how to cope with users without relevant activity history ("cold start" problem). On the other hand, we explore how to extend audit data collection with heterogeneous attributes (i.e. categorical, graph and text) to improve insider threat detection. After aggregating IAM audit data into sessions, we introduce and compare general anomaly detection methods for mixed data to a user identification approach, designed to learn the distinction between normal and malicious user behavior. We find that user identification outperforms general anomaly detection and is effective against masquerades. An additional clustering step allows us to reduce false positives among similar users. However, user identification is not effective against insider threats. Furthermore, results suggest that the current scope of our audit data collection should be extended. In order to tackle the "cold start" problem, we adopt a zero-shot learning approach.
Focusing on the CERT insider threat use case, we extend an intrusion detection system by integrating user relations to organizational entities (like assignments to projects or teams) in order to better estimate user behavior and improve intrusion detection performance. Results show that this approach is effective in two realistic scenarios. Finally, to support additional sources of audit data for insider threat detection, we propose a method representing audit events as graph edges with heterogeneous attributes. By performing detection at a fine-grained level, this approach advantageously improves anomaly traceability while reducing the need for aggregation and feature engineering. Our results show that this method is effective in finding intrusions in authentication and email logs. Overall, our work suggests that masquerades and insider threats call for different detection methods. For masquerades, user identification is a promising approach. To find malicious insiders, graph features representing user context and relations to other entities can be informative. This opens the door for tighter coupling of intrusion detection with user identities, roles and privileges used in IAM solutions. KW - Anomalie KW - Authentifikation KW - Computersicherheit Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8704 ER - TY - JOUR A1 - Mandarawi, Waseem A1 - Rottmeier, Jürgen A1 - Rezaeighale, Milad A1 - de Meer, Hermann T1 - Policy-Based Composition and Embedding of Extended Virtual Networks and SFCs for IIoT JF - Algorithms N2 - The autonomic composition of Virtual Networks (VNs) and Service Function Chains (SFCs) based on application requirements is significant for complex environments. In this paper, we use graph transformation in order to compose an Extended Virtual Network (EVN) that is based on different requirements, such as locations, low latency, redundancy, and security functions. The EVN can represent physical environment devices and virtual application and network functions. We build a generic Virtual Network Embedding (VNE) framework for transforming an Application Request (AR) to an EVN. Subsequently, we define a set of transformations that reflect preliminary topological, performance, reliability, and security policies. These transformations update the entities and demands of the VN and add SFCs that include the required Virtual Network Functions (VNFs). Additionally, we propose a greedy proactive heuristic for path-independent embedding of the composed SFCs. This heuristic is appropriate for real complex environments, such as industrial networks. Furthermore, we present an Industrial Internet of Things (IIoT) use case that was inspired by Industry 4.0 concepts, in which EVNs for remote asset management are deployed over three levels: manufacturing halls, edge computing, and cloud computing. We also implement the developed methods in Alevin and show exemplary mapping results from our use case. Finally, we evaluate the chain embedding heuristic using a random topology that is typical for such a use case, and show that it can improve the admission ratio and resource utilization with minimal overhead.
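The general flavour of greedy chain embedding can be sketched as follows (this is not the paper's heuristic; node names, capacities and demands are made up): each VNF of a chain is mapped to the substrate node with the most spare CPU that still satisfies the demand, and the chain is rejected if any VNF cannot be placed.

```python
nodes = {"edge1": 4.0, "edge2": 2.0, "cloud": 16.0}   # spare CPU per substrate node

def embed_chain(chain, nodes):
    """Greedily map each (vnf, cpu_demand) pair to the node with the most spare CPU."""
    mapping, spare = {}, dict(nodes)
    for vnf, demand in chain:
        candidates = [n for n, cpu in spare.items() if cpu >= demand]
        if not candidates:
            return None                      # chain rejected (no admission)
        best = max(candidates, key=spare.get)
        mapping[vnf] = best
        spare[best] -= demand
    return mapping

print(embed_chain([("firewall", 1.0), ("ids", 2.0), ("analytics", 8.0)], nodes))
```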
KW - NFV KW - SFC KW - VNE KW - IIoT Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8488 SN - 1999-4893 VL - 13 IS - 9 PB - MDPI ER - TY - THES A1 - Gerl, Armin T1 - Modelling of a Privacy Language and Efficient Policy-based De-identification N2 - The processing of personal information is omnipresent in our data-driven society enabling personalized services, which are regulated by privacy policies. Although privacy policies are strictly defined by the General Data Protection Regulation (GDPR), no systematic mechanism is in place to enforce them. Especially if data is merged from several sources into a data-set with different privacy policies associated, the management and compliance to all privacy requirements is challenging during the processing of the data-set. Privacy policies can vary hereby due to different policies for each source or personalization of privacy policies by individual users. Thus, the risk for negligent or malicious processing of personal data due to defiance of privacy policies exists. To tackle this challenge, a privacy-preserving framework is proposed. Within this framework privacy policies are expressed in the proposed Layered Privacy Language (LPL) which allows to specify legal privacy policies and privacy-preserving de-identification methods. The policies are enforced by a Policy-based De-identification (PD) process. The PD process enables efficient compliance to various privacy policies simultaneously while applying pseudonymization, personal privacy anonymization and privacy models for de-identification of the data-set. Thus, the privacy requirements of each individual privacy policy are enforced filling the gap between legal privacy policies and their technical enforcement. KW - Privacy Language KW - Personal Privacy KW - Privacy-Preservation KW - GDPR KW - LPL KW - Datenschutz KW - Anonymisierung KW - Pseudonymisierung KW - Formale Sprache KW - Datenschutzgesetz Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7674 ER - TY - THES A1 - Jurgovsky, Johannes T1 - Context-Aware Credit Card Fraud Detection N2 - Credit card fraud has emerged as major problem in the electronic payment sector. In this thesis, we study data-driven fraud detection and address several of its intricate challenges by means of machine learning methods with the goal to identify fraudulent transactions that have been issued illegitimately on behalf of the rightful card owner. In particular, we explore several means to leverage contextual information beyond a transaction’s basic attributes on the transaction level, sequence level and user level. On the transaction level, we aim to identify fraudulent transactions which, in terms of their attribute values, are globally distinguishable from genuine transactions. We provide an empirical study of the influence of class imbalance and forecasting horizons on the classification performance of a random forest classifier. We augment transactions with additional features extracted from external knowledge sources and show that external information about countries and calendar events improves classification performance most noticeably on card-not-present transactions. On the sequence level, we aim to detect frauds that are inconspicuous in the background of all transactions but peculiar with respect to the short-term sequence they appear in. We use a Long Short-term Memory network (LSTM) for modeling the sequential succession of transactions. 
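A minimal sketch of such a sequence model follows; it is illustrative only, and the feature dimension, hidden size and the use of PyTorch are assumptions rather than the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class SequenceFraudModel(nn.Module):
    """Toy LSTM classifier over a short sequence of transaction feature vectors."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)                # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))  # fraud probability per sequence

model = SequenceFraudModel(n_features=8)
fake_batch = torch.randn(4, 10, 8)              # 4 card holders, 10 transactions, 8 features
print(model(fake_batch).shape)                  # -> torch.Size([4, 1])
```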
Our results suggest that LSTM-based modeling is a promising strategy for characterizing sequences of card-present transactions but it is not adequate for card-not-present transactions. On the user level, we elaborate on feature aggregations and propose a flexible concept allowing us to define numerous features by means of a simple syntax. We provide a CUDA-based implementation for the computationally expensive extraction with a speed-up of two orders of magnitude over a single-core implementation. Our feature selection study reveals that aggregates extracted from users’ transaction sequences are more useful than those extracted from merchant sequences. Moreover, we discover multiple sets of candidate features with performance equivalent to manually engineered aggregates while being structurally different. Regarding future work, we motivate the usage of simple and transparent machine learning methods for credit card fraud detection and we sketch a simple user-focused modeling approach. KW - Credit Card Fraud Detection KW - Machine Learning KW - Data Augmentation KW - Feature Engineering KW - Kreditkartenmissbrauch KW - Computersicherheit Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7622 ER - TY - THES A1 - Klaus, Tina T1 - Complexity Analysis of Quantizations of Multidimensional Stochastic Differential Equations N2 - The dissertation is located in the field of quantizations of certain stochastic processes, namely a solution X of a multidimensional stochastic differential equation (SDE). The quantization problem for X consists in approximating X by a random element which takes only finitely many values. Our main interest lies in the investigation of the asymptotic behavior of the Nth minimal quantization error of X as N tends to infinity, which incorporates the determination of both the sharp rate of convergence and explicit asymptotic constants. Explicit asymptotic constants, in particular, have so far been unknown in the context of multidimensional SDEs. Furthermore, as part of our analysis, we provide a method which yields a strongly asymptotically optimal sequence of N-quantizations of X. In certain special cases our method is fully constructive and the algorithm is easy to implement. KW - Stochastische Differentialgleichung KW - Komplexität KW - Quantifizierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7665 ER - TY - THES A1 - Charpenay, Victor T1 - Semantics for the Web of Things: Modeling the Physical World as a Collection of Things and Reasoning with their Descriptions N2 - The main research question of this thesis is to develop a theory that would provide foundations for the development of Web of Things (WoT) systems. A theory for WoT shall provide a model of the ‘things’ WoT agents relate to such that these relations determine what interactions take place between these agents. This thesis presents a knowledge-based approach in which the semantics of WoT systems is given by a transformation (a homomorphism) between a graph representing agent interactions and a knowledge graph describing ‘things’. It focuses on three aspects of knowledge graphs in particular: the vocabulary with which assertions can be made, the rules that can be defined over this vocabulary and its serialization to efficiently exchange pieces of a knowledge graph. Each aspect is developed in a dedicated chapter, with specific contributions to the state of the art.
The need for a unified vocabulary to describe ‘things’ in WoT and the Internet of Things (IoT) has been identified early on in the literature. Many proposals have been consequently published, in the form of Web ontologies. In Ch. 2, a systematic review of these proposals is being developed, as well as a comparison with the data models of the principal IoT frameworks and protocols. The contribution of the thesis in that respect is an alignment between the Thing Description (TD) model and the Semantic Sensor Network (SSN) ontology, two standards of the World Wide Web Consortium (W3C). The scope of this thesis is generally limited to Web standards, especially those defined by the Resource Description framework (RDF). Web ontologies do not only expose a vocabulary but also rules to extend a knowledge graph by means of reasoning. Starting from a set of TD documents, new relations between ‘things’ can be “discovered” this way, indicating possible interactions between the servients that relate to them. The experiments presented in Ch. 3 were done on the basis of this semantic discovery framework on two use cases: a building automation use case provided by Intel Labs and an industrial control use case developed internally at Siemens. The relations to discover often involve anonymous nodes in the knowledge graph: the chapter also introduces a novel skolemization algorithm to correctly process these nodes on a well-defined fragment of the Web Ontology Language (OWL). Finally, because this semantic discovery framework relies on the exchange of TD documents, Ch. 4 introduces a binary format for RDF that proves efficient in serializing TD assertions such that even the smallest WoT agents, i.e. micro-controllers, can store and process them. A formalization for the semantics-preserving compaction and querying of TD documents is also introduced in this chapter, at the basis of an embedded RDF store called the µRDF store. The ability of all WoT agents to query logical assertions about themselves and their environment, as found in TD documents, is a first step towards knowledge-based intelligent systems that can operate autonomously and dynamically in a decentralized way. The µRDF store is an attempt to illustrate the practical outcomes of the theory of WoT developed throughout this thesis. N2 - Die Dissertation entwickelt eine theoretische Grundlage für die Spezifikation Web of Things (WoT)-Systemen. Die Spezifikation der WoT-Systeme basiert auf einem Modell für die Dinge, oder Things, mit denen WoT-Agenten Beziehungen schaffen, welche Interaktionen zwischen Agenten erlauben. Diese Dissertation stellt einen wissensbasierten Ansatz vor, in dem die Semantik von WoT-Systemen als eine Transformation von einem Graphen von Agenten-Interaktionen nach einem Knowledge Graph definiert ist. Diese Arbeit deckt genau drei Aspekte Knowledge Graphs ab: das Vokabular, mit dem logische Schlüsse formuliert werden, die Regeln, die auf einem Vokabular basieren können und Serialisierung, um den effizienten Austausch zwischen Teilen eines Knowledge Graph zu ermöglichen. Alle drei Aspekt werden mit ihren wissenschaftlichen Beitrag in einem eigenen Kapitel adressiert. Der Bedarf an einem vereinigten Vokabular, um im WoT und dem Internet of Things (IoT) Things zu beschreiben, wurden in der Literatur frühzeitig identifiziert. Viele Ansatze wurden diesbezüglich vor allem als Web Ontologien veröffentlicht. Im Kapitel 2 werden diese Ansätze miteinander, sowie mit Datenmodelle dominierender IoT-Frameworks und Protokolle verglichen. 
Der Beitrag der Dissertation diesbezüglich ist die Verschmelzung des WoT Thing Description (TD) Modells und der Semantic Sensor Network (SSN) Ontologie, zwei vom World Wide Web Consortium (W3C) veröffentlichte Standards, in eine einzige Ontologie. Der Rahmen dieser Dissertation wird auf Web Standards begrenzt, insbesondere im Resource Description Framework (RDF) enthaltenen Standards. Web Ontologien bestehen nicht nur aus einm Vokabular, sondern auch aus Regel, um einen Knowledge Graphen durch Inferenz zu erweitern. Anhand einer Menge von TD-Dokumenten können neue Beziehungen zwischen Things abgeleitet werden und dadurch neue Interaktionen zwischen denjenigen Agenten, die sich auf diese Things beziehen eingeführt werden. Die im Kapitel 3 beschriebenen Experimente setzen dieses semantische Framework in zwei Domäne um: Gebäudeautomatisierung und Industrielle Kontrollsysteme. Das Erkennen impliziter Beziehungen zwischen Things hängt in bestimmten Fällen von sogenannten anonymen Knoten im Graphen ab: das Kapitel führt einen neuen Skolemization Algorithmus ein, um diese Knoten für einen bestimmten Teil der Web Ontology Language (OWL) korrekt zu verarbeiten. Zum Schluss, da die Umsetzung dieses semantischen Frameworks den Austausch von TD-Dokumenten erfordert, wird im Kapitel 4 ein binäres Format für RDF eingeführt, welches sich als sehr effizient für die Serialisierung erweist, damit auch kleine WoT Agenten, nämlich Mikrocontroller, TD-Dokumente speichern und verarbeiten können. Eine formale Definition für die Verdichtung und die Abfrage von TD-Dokumenten wird in diesem Kapitel eingeführt. Das Kapitel beschreibt auch die Implementierung einer eingebetteten RDF Datenbank, die µRDF Store genannt wurde. Die Fähigkeit WoT-Agenten logische Schlüsse über sich selbst und ihre Umgebung zu ziehen ist der erste Schritt in Richtung eines wissensbasierten intelligenten Systems, das autonom, dynamisch und dezentral agieren kann. Der µRDF Store zeigt die praktischen Vorteile, der in dieser Dissertation entwickelten Theorie für WoT auf. KW - Semantic Web KW - Web of Things KW - Internet of Things KW - Thing Description KW - Web Ontologies KW - Semantic Web KW - Internet der Dinge Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7578 ER - TY - THES A1 - Wahl, Florian T1 - Methods for monitoring the human circadian rhythm in free-living N2 - Our internal clock, the circadian clock, determines at which time we have our best cognitive abilities, are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at anytime and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health impact, especially when exercised over a long period of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken in hourly intervals while the subject stays in dim light condition from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. 
The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with the smart eyeglasses with 0.9 ROC AUC for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to the external clocks, thus living a healthier lifestyle. KW - context recognition KW - human circadian rhythm KW - machine learning KW - sleep timing KW - smart eyeglasses KW - Tagesrhythmus Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7607 ER - TY - THES A1 - Kronawitter, Stefan T1 - Automatic Performance Optimization of Stencil Codes N2 - A widely used class of codes are stencil codes. Their general structure is very simple: data points in a large grid are repeatedly recomputed from neighboring values. This predefined neighborhood is the so-called stencil. Despite their very simple structure, stencil codes are hard to optimize since only a few computations are performed while a comparatively large number of values have to be accessed, i.e., stencil codes usually have a very low computational intensity. Moreover, the set of optimizations and their parameters also depend on the hardware on which the code is executed. To cut a long story short, current production compilers are not able to fully optimize this class of codes and optimizing each application by hand is not practical. As a remedy, we propose a set of optimizations and describe how they can be applied automatically by a code generator for the domain of stencil codes. A combination of space and time tiling is able to increase the data locality, which significantly reduces the memory-bandwidth requirements: a standard three-dimensional 7-point Jacobi stencil can be accelerated by a factor of 3. This optimization can target basically any stencil code, while others are more specialized.
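To make the stencil structure concrete, here is a naive 3D 7-point Jacobi sweep for a Poisson-type problem in Python/NumPy. It is only a reference implementation: the speedup quoted above comes from the generator's space and time tiling and depends on the target hardware, none of which is reproduced here.

```python
import numpy as np

def jacobi_7pt(u, f, h2, iterations=10):
    """Naive 3D 7-point Jacobi sweeps on the interior of u (for Laplace(u) = f)."""
    for _ in range(iterations):
        v = u.copy()
        v[1:-1, 1:-1, 1:-1] = (
            u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
            u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
            u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:] -
            h2 * f[1:-1, 1:-1, 1:-1]
        ) / 6.0
        u = v
    return u

n = 64
u = np.zeros((n, n, n))
f = np.ones((n, n, n))
u = jacobi_7pt(u, f, h2=(1.0 / (n - 1)) ** 2, iterations=5)
```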
E.g., support for arbitrary linear data layout transformations is especially beneficial for colored kernels, such as a Red-Black Gauss-Seidel smoother. On the one hand, an optimized data layout for such kernels reduces the bandwidth requirements while, on the other hand, it simplifies an explicit vectorization. Other noticeable optimizations described in detail are redundancy elimination techniques to eliminate common subexpressions both in a sequence of statements and across loop boundaries, arithmetic simplifications and normalizations, and the vectorization mentioned previously. In combination, these optimizations are able to increase the performance not only of the model problem given by Poisson’s equation, but also of real-world applications: an optical flow simulation and the simulation of a non-isothermal and non-Newtonian fluid flow. KW - Optimierung KW - Codegenerierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7618 ER - TY - THES A1 - Stadler, Thomas T1 - Eine Anwendung der Invariantentheorie auf das Korrespondenzproblem lokaler Bildmerkmale N2 - Als sich in der ersten Hälfte des 19. Jahrhunderts zunehmend mehr bedeutende Mathematiker mit der Suche nach Invarianten beschäftigten, konnte natürlich noch niemand vorhersehen, dass die Invariantentheorie mit Beginn des Computerzeitalters in der Bildverarbeitung bzw. dem Rechnersehen ein äußerst fruchtbares Anwendungsgebiet finden wird. In dieser Arbeit wird eine neue Anwendungsmöglichkeit der Invariantentheorie in der Bildverarbeitung vorgestellt. Dazu werden lokale Bildmerkmale betrachtet. Dabei handelt es sich um die Koordinaten einer Polynomfunktion bzgl. einer geeigneten Orthonormalbasis von P_n(R^2,R), die die zeitintegrierte Sensorinputfunktion auf lokalen Pixelfenstern bestmöglich approximiert. Diese Bildmerkmale werden in vielen Anwendungen eingesetzt, um Objekte in Bildern zu erkennen und zu lokalisieren. Beispiele hierfür sind die Detektion von Werkstücken an einem Fließband oder die Verfolgung von Fahrbahnmarkierungen in Fahrerassistenzsystemen. Modellieren lässt sich die Suche nach einem Muster in einem Suchbild als Paar von Stereobildern, auf denen lokal die affin-lineare Gruppe AGL(R) operiert. Will man also feststellen, ob zwei lokale Pixelfenster in etwa Bilder eines bestimmten dreidimensionalen Oberflächenausschnitts sind, ist zu klären, ob die Bildausschnitte durch eine Operation der Gruppe AGL(R) näherungsweise ineinander übergeführt werden können. Je nach Anwendung genügt es bereits, passende Untergruppen G von AGL(R) zu betrachten. Dank der lokalen Approximation durch Polynomfunktionen induziert die Operation einer Untergruppe G eine Operation auf dem reellen Vektorraum P_n(R^2,R). Damit lässt sich das Korrespondenzproblem auf die Frage reduzieren, ob es eine Transformation T in G gibt so, dass p ungefähr mit der Komposition von q und T für die zugehörigen Approximationspolynome p,q in P_n(R^2,R) gilt. Mit anderen Worten, es ist zu klären, ob sich p und q näherungsweise in einer G-Bahn befinden, eine typische Fragestellung der Invariantentheorie. Da nur lokale Bildausschnitte betrachtet werden, genügt es weiter, Untergruppen G von GL_2(R) zu betrachten. Dann erhält man sofort auch die Antwort für das semidirekte Produkt von R^2 mit G. Besonders interessant für Anwendungen ist hierbei die spezielle orthogonale Gruppe G=SO_2(R) und damit insgesamt die eigentliche Euklidische Gruppe. Für diese Gruppe und spezielle Pixelfenster ist das Korrespondenzproblem bereits gelöst. 
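A naive baseline for the correspondence question above (and explicitly not the invariant-theoretic method developed in the thesis): simply search over rotation angles and compare the two local polynomial patches on sample points. Function names, the sampling scheme, and the tolerance are illustrative.

```python
import numpy as np

def matches_under_rotation(p, q, radius=1.0, n_angles=360, n_samples=200, tol=1e-3):
    """Check whether q(x, y) approximately equals p applied to rotated (x, y),
    i.e. whether p and q lie in (almost) the same SO(2)-orbit."""
    rng = np.random.default_rng(0)
    r = radius * np.sqrt(rng.random(n_samples))
    phi = 2 * np.pi * rng.random(n_samples)
    x, y = r * np.cos(phi), r * np.sin(phi)
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        xr = np.cos(theta) * x - np.sin(theta) * y
        yr = np.sin(theta) * x + np.cos(theta) * y
        if np.max(np.abs(p(xr, yr) - q(x, y))) < tol:
            return True
    return False

p = lambda x, y: x**2 + 0.5 * y
q = lambda x, y: y**2 + 0.5 * x   # p composed with a rotation by 90 degrees
print(matches_under_rotation(p, q))  # True
```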
In dieser Arbeit wird das Problem in eben dieser Konstellation ebenfalls gelöst, allerdings auf elegante Weise mit Methoden der Invariantentheorie. Der Ansatz, der hier vorgestellt wird, ist aber nicht auf diese Gruppe und spezielle Pixelfenster begrenzt, sondern leicht auf weitere Fälle erweiterbar. Dazu ist insbesondere zu klären, wie sich sogenannte fundamentale Invarianten von lokalen Bildmerkmalen, also letztendlich Invarianten von Polynomfunktionen, berechnen lassen, d.h. Erzeugendensysteme der entsprechenden Invariantenringe. Mit deren Hilfe lässt sich die Zugehörigkeit einer Polynomfunktion zur Bahn einer anderen Funktion auf einfache Weise untersuchen. Neben der Vorstellung des Verfahrens zur Korrespondenzfindung und der dafür notwendigen Theorie werden in dieser Arbeit Erzeugendensysteme von Invariantenringen untersucht, die besonders "schöne" Eigenschaften besitzen. Diese schönen Erzeugendensysteme von Unteralgebren werden, analog zu Gröbner-Basen als Erzeugendensysteme von Idealen, SAGBI-Basen genannt ("Subalgebra Analogs to Gröbner Bases for Ideals"). SAGBI-Basen werden hier insbesondere aus algorithmischer Sicht behandelt, d.h. die Berechnung von SAGBI-Basen steht im Vordergrund. Dazu werden verschiedene Algorithmen erarbeitet, deren Korrektheit bewiesen und implementiert. Daraus resultiert ein Software-Paket zu SAGBI-Basen für das Computeralgebrasystem ApCoCoA, dessen Funktionalität in diesem Umfang in keinem anderen Computeralgebrasystem zu finden ist. Im Zuge der Umsetzung der einzelnen Algorithmen konnte außerdem die Theorie der SAGBI-Basen an zahlreichen Stellen erweitert werden. KW - Computeralgebra KW - SAGBI-Basen KW - Invariantentheorie KW - Bildverarbeitung KW - (lokale) Bildmerkmale Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4026 ER - TY - THES A1 - de Ponte Müller, Fabian T1 - Cooperative Relative Positioning for Vehicular Environments N2 - Fahrerassistenzsysteme sind ein wesentlicher Baustein zur Steigerung der Sicherheit im Straßenverkehr. Vor allem sicherheitsrelevante Applikationen benötigen eine genaue Information über den Ort und die Geschwindigkeit der Fahrzeuge in der unmittelbaren Umgebung, um mögliche Gefahrensituationen vorherzusehen, den Fahrer zu warnen oder eigenständig einzugreifen. Repräsentative Beispiele für Assistenzsysteme, die auf eine genaue, kontinuierliche und zuverlässige Relativpositionierung anderer Verkehrsteilnehmer angewiesen sind, sind Notbremsassistenten, Spurwechselassistenten und Abstandsregeltempomate. Moderne Lösungsansätze benutzen Umfeldsensorik wie zum Beispiel Radar, Laserscanner oder Kameras, um die Position benachbarter Fahrzeuge zu schätzen. Gemeinsame Nachteile dieser Sensorsysteme sind ihre limitierte Erfassungsreichweite und die Notwendigkeit einer direkten und nicht blockierten Sichtlinie zum Nachbarfahrzeug. Kooperative Lösungen basierend auf einer Fahrzeug-zu-Fahrzeug-Kommunikation können die eigene Wahrnehmungsreichweite erhöhen, indem Positionsinformationen zwischen den Verkehrsteilnehmern ausgetauscht werden. In dieser Dissertation soll die Möglichkeit der kooperativen Relativpositionierung von Straßenfahrzeugen mittels Fahrzeug-zu-Fahrzeug-Kommunikation auf ihre Genauigkeit, Kontinuität und Robustheit untersucht werden. Anstatt die in jedem Fahrzeug unabhängig ermittelte Position zu übertragen, werden in einem neuartigen Ansatz GNSS-Rohdaten, wie Pseudoranges und Doppler-Messungen, ausgetauscht.
Dies hat den Vorteil, dass sich korrelierte Fehler in beiden Fahrzeugen potentiell herauskürzen. Dies wird in dieser Dissertation mathematisch untersucht, simulativ modelliert und experimentell verifiziert. Um die Zuverlässigkeit und Kontinuität auch in "gestörten" Umgebungen zu erhöhen, werden in einem Bayesischen Filter die GNSS-Rohdaten mit Inertialsensormessungen aus zwei Fahrzeugen fusioniert. Die Validierung des Sensorfusionsansatzes wurde im Rahmen dieser Dissertation in einem Verkehrs- sowie in einem GNSS-Simulator durchgeführt. Zur experimentellen Untersuchung wurden zwei Testfahrzeuge mit den verschiedenen Sensoren ausgestattet und Messungen in diversen Umgebungen gefahren. In dieser Arbeit wird gezeigt, dass auf Autobahnen die Relativposition eines anderen Fahrzeugs mit einer Genauigkeit von unter einem Meter kontinuierlich geschätzt werden kann. Eine hohe Zuverlässigkeit in longitudinaler und lateraler Richtung kann erzielt werden, und das System weist 90% der Zeit eine Unsicherheit unter 2.5 m auf. In ländlichen Umgebungen wächst die Unsicherheit in der relativen Position. Mit Hilfe der On-Board-Sensoren können Ausfälle bei der Fahrt durch Wälder und Dörfer korrekt überbrückt werden. In städtischen Umgebungen werden die Limitierungen des Systems deutlich. Durch die erschwerte Schätzung der Fahrtrichtung des Ego-Fahrzeugs ist vor allem die longitudinale Komponente der relativen Position in städtischen Umgebungen stark verfälscht. N2 - Advanced driver assistance systems play an important role in increasing safety on today's roads. The knowledge about the other vehicles' positions is a fundamental prerequisite for numerous safety-critical applications, making it possible to foresee critical situations, warn the driver or autonomously intervene. Forward collision avoidance systems, lane change assistants or adaptive cruise control are examples of safety-relevant applications that require an accurate, continuous and reliable relative position of surrounding vehicles. Currently, the positions of surrounding vehicles are estimated by measuring the distance with e.g. radar, laser scanners or camera systems. However, all these techniques have limitations in their perception range, as all of them can only detect objects in their line-of-sight. The limited perception range of today's vehicles can be extended in the future by using cooperative approaches based on Vehicle-to-Vehicle (V2V) communication. In this thesis, the capabilities of cooperative relative positioning for vehicles will be assessed in terms of its accuracy, continuity and reliability. A novel approach where Global Navigation Satellite System (GNSS) raw data is exchanged between the vehicles is presented. Vehicles use GNSS pseudorange and Doppler measurements from surrounding vehicles to estimate the relative positioning vector in a cooperative way. In this thesis, this approach is shown to outperform the absolute position subtraction as it is able to effectively cancel out errors common to both GNSS receivers. This is modeled theoretically and demonstrated empirically using simulated signals from a GNSS constellation simulator. In order to cope with GNSS outages and to have a sufficiently good relative position estimate even in strong multipath environments, a sensor fusion approach is proposed. In addition to the GNSS raw data, inertial measurements from speedometers, accelerometers and turn rate sensors from each vehicle are exchanged over V2V communication links.
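A small numeric illustration, not taken from the thesis, of why exchanging raw pseudoranges helps: an error common to both receivers (e.g. a satellite clock or atmospheric delay) cancels in the between-receiver single difference, leaving only the independent receiver noise. Satellite geometry and error magnitudes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
sats = np.array([[15e6, 10e6, 20e6], [-12e6, 8e6, 21e6],
                 [5e6, -14e6, 19e6], [0.0, 2e6, 22e6]])   # invented satellite positions [m]
pos_a = np.array([0.0, 0.0, 0.0])      # vehicle A
pos_b = np.array([20.0, -5.0, 0.0])    # vehicle B, about 20 m away

common_err = 30.0 * rng.standard_normal(len(sats))   # shared error (clock / atmosphere)
noise_a = 0.5 * rng.standard_normal(len(sats))       # independent receiver noise
noise_b = 0.5 * rng.standard_normal(len(sats))

rho_a = np.linalg.norm(sats - pos_a, axis=1) + common_err + noise_a
rho_b = np.linalg.norm(sats - pos_b, axis=1) + common_err + noise_b

single_diff = rho_a - rho_b                                   # shared error cancels
true_diff = np.linalg.norm(sats - pos_a, axis=1) - np.linalg.norm(sats - pos_b, axis=1)
print(np.abs(single_diff - true_diff))                        # residual at the noise level (~0.5 m)
print(np.abs(rho_a - np.linalg.norm(sats - pos_a, axis=1)))   # absolute pseudorange error (~30 m)
```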
A Bayesian approach is applied to consider the uncertainties inherent to each of the information sources. In a dynamic Bayesian network, the temporal relationship of the relative position estimate is predicted by using relative vehicle movement models. Real-world measurements in highway, rural and urban scenarios are also performed in the scope of this work to demonstrate the performance of the cooperative relative positioning approach based on sensor fusion. The results show that the relative position of another vehicle towards the ego vehicle can be estimated with sub-meter accuracy in highway scenarios. Here, good reliability and 90% availability with an uncertainty of less than 2.5 m are achieved. In rural environments, drives through forests and towns are correctly bridged with the support of on-board sensors. In an urban environment, the difficult estimation of the ego vehicle heading has a major impact on the relative position estimate, yielding large errors in its longitudinal component. KW - Fahrerassistenzsystem Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5411 ER - TY - THES A1 - Fischer, Andreas T1 - An Evaluation Methodology for Virtual Network Embedding N2 - The increasing scale and complexity of computer networks impose a need for highly flexible management mechanisms. The concept of network virtualization promises to provide this flexibility. Multiple arbitrary virtual networks can be constructed on top of a single substrate network. This allows network operators and service providers to tailor their network topologies to the specific needs of any offered service. However, the assignment of resources proves to be a problem. Each newly defined virtual network must be realized by assigning appropriate physical resources. For a given set of virtual networks, two questions arise: Can all virtual networks be accommodated in the given substrate network? And how should the respective resources be assigned? The underlying problem is commonly known as the Virtual Network Embedding problem. A multitude of algorithms has already been proposed, aiming to provide solutions to that problem under various constraints. For the evaluation of these algorithms, typically an empirical approach is adopted, using artificially created random problem instances. However, due to complex effects of random problem generation, the obtained results can be hard to interpret correctly. A structured evaluation methodology that can avoid these effects is currently missing. This thesis aims to fill that gap. Based on a thorough understanding of the problem itself, the effects of random problem generation are highlighted. A new simulation architecture is defined, increasing the flexibility for experimentation with embedding algorithms. A novel way of generating embedding problems is presented which mitigates the effects of conventional problem generation approaches. An evaluation using these newly defined concepts demonstrates how new insights into algorithm behavior can be gained. The proposed concepts support experimenters in obtaining more precise and tangible evaluation data for embedding algorithms.
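For illustration, a minimal way to generate a random Virtual Network Embedding instance and greedily map virtual nodes onto substrate nodes with networkx. All parameters, attribute names, and the greedy rule are invented here, and link mapping is omitted; the thesis's point is precisely that results obtained on such randomly generated instances must be interpreted with care.

```python
import random
import networkx as nx

def random_instance(n_sub=20, n_virt=4, p=0.3, seed=0):
    """Random substrate and virtual network with CPU capacities/demands."""
    rng = random.Random(seed)
    substrate = nx.gnp_random_graph(n_sub, p, seed=seed)
    for v in substrate.nodes:
        substrate.nodes[v]["cpu"] = rng.randint(50, 100)
    virtual = nx.gnp_random_graph(n_virt, 0.5, seed=seed + 1)
    for v in virtual.nodes:
        virtual.nodes[v]["cpu"] = rng.randint(1, 50)
    return substrate, virtual

def greedy_node_mapping(substrate, virtual):
    """Map each virtual node to the free substrate node with the most residual CPU."""
    residual = {v: substrate.nodes[v]["cpu"] for v in substrate.nodes}
    mapping = {}
    for v in sorted(virtual.nodes, key=lambda v: virtual.nodes[v]["cpu"], reverse=True):
        candidates = [s for s in residual
                      if s not in mapping.values() and residual[s] >= virtual.nodes[v]["cpu"]]
        if not candidates:
            return None                       # embedding rejected
        best = max(candidates, key=residual.get)
        mapping[v] = best
        residual[best] -= virtual.nodes[v]["cpu"]
    return mapping

sub, virt = random_instance()
print(greedy_node_mapping(sub, virt))
```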
KW - Virtual Network Embedding KW - Empirical Evaluation KW - Network Virtualization KW - Experimental Algorithmics KW - Virtuelles Netz KW - Virtualisierung KW - Algorithmus KW - Virtuelles Netz KW - Kombinatorische Einbettung Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4793 ER - TY - THES A1 - Löwe, Stefan T1 - Effective Approaches to Abstraction Refinement for Automatic Software Verification N2 - This thesis presents various techniques that aim at enabling more effective and more efficient approaches for automatic software verification. After a brief motivation of why automatic software verification is becoming ever more relevant, we continue by detailing the formalism used in this thesis and the concepts it is built on. We then describe the design and implementation of the value analysis, an analysis for automatic software verification that tracks state information concretely. From a thorough evaluation based on well over 4 000 verification tasks from the latest edition of the International Competition on Software Verification (SV-COMP), we learn that this plain value analysis leads to an efficient verification process for many verification tasks, but at the same time, fails to solve other verification tasks due to state-space explosion. From this insight we infer that some form of abstraction technique must be added to the value analysis in order to also allow the successful verification of large and complex verification tasks. As a solution, we propose to incorporate counterexample-guided abstraction refinement (CEGAR) and interpolation into the value domain. To this end, we design a novel interpolation procedure that extracts interpolants for the value domain from infeasible counterexamples, allowing us to form a precision strong enough to exclude these infeasible counterexamples and to make progress in the CEGAR loop. We then describe several optimizations and extensions to these concepts, such that the value analysis with CEGAR becomes competitive for automatic software verification. As the next step, we combine the CEGAR-based value analysis with a predicate analysis, to obtain a more precise and efficient composite analysis based on CEGAR. This composite analysis is indeed on a par with the world’s leading software verification tools, as witnessed by the results of SV-COMP’13, where this approach achieved the 2nd place in the overall ranking. With competitive CEGAR-based analyses available for the value domain, the predicate domain, and the combination thereof, we then turn our attention to techniques whose goal is to make all these CEGAR-based approaches more successful. Our first novel idea in this regard is based on the concept of infeasible sliced prefixes, which allow the computation of different precisions from a single infeasible counterexample. This adds choice to the CEGAR loop, while without this enhancement, no choice for a specific precision, i.e., a specific refinement, is possible. In our evaluation we show, for both the value analysis and the predicate analysis, that choosing different infeasible sliced prefixes during the refinement step leads to major differences in verification effectiveness and verification efficiency. Building on the concept of infeasible sliced prefixes, we define several heuristics in order to precisely select a single refinement from a set of possible refinements.
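To make the CEGAR loop referenced above concrete, here is a schematic sketch; the functions are abstract placeholders supplied by the caller and do not reflect the actual CPAchecker implementation or its data structures.

```python
def cegar(program, initial_precision, check, is_feasible, interpolate):
    """Schematic counterexample-guided abstraction refinement loop.

    check(program, precision)  -> None (abstraction is safe) or an abstract counterexample path
    is_feasible(path)          -> True if the path is a real error path
    interpolate(path)          -> new facts (e.g. interpolants) that exclude the spurious path
    """
    precision = set(initial_precision)
    while True:
        counterexample = check(program, precision)
        if counterexample is None:
            return "SAFE", precision
        if is_feasible(counterexample):
            return "UNSAFE", counterexample
        # Spurious counterexample: refine the abstraction and try again.
        precision |= set(interpolate(counterexample))
```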
We make this new concept, which we refer to as guided refinement selection, available to both the value and the predicate analysis, and in a large-scale evaluation we try to answer the question of which selection technique leads to well-suited abstractions and, thus, to a more effective verification process. Additionally, we present the idea of inter-analysis refinement selection, where the refinement component of a composite analysis may decide which of its component analyses is best to be refined, and in yet another evaluation we highlight the positive effects of this technique. Finally, we present the results of SV-COMP’16, where the verifier we contributed, which is based on the concepts and ideas presented in this thesis, achieved the 1st place in the category DeviceDriversLinux64. KW - software verification, model checking, counterexample guided abstraction refinement, CEGAR, interpolation, sliced prefixes, refinement selection, value analysis, predicate analysis, CPAchecker, automatic, automated KW - Programmverifikation Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4815 ER - TY - THES A1 - Petit, Albin T1 - Introducing Privacy in Current Web Search Engines N2 - During the last few years, the technological progress in collecting, storing and processing a large quantity of data for a reasonable cost has raised serious privacy issues. Privacy concerns many areas, but is especially important in frequently used services like search engines (e.g., Google, Bing, Yahoo!). These services allow users to retrieve relevant content on the Internet by exploiting their personal data. In this context, developing solutions to enable users to use these services in a privacy-preserving way is becoming increasingly important. In this thesis, we introduce SimAttack, an attack against existing protection mechanisms for querying search engines in a privacy-preserving way. This attack aims at retrieving the original user query. We show with this attack that three representative state-of-the-art solutions do not protect user privacy in a satisfactory manner. We therefore develop PEAS, a new protection mechanism that better protects user privacy. This solution leverages two types of protection: hiding the user identity (with a succession of two nodes) and masking users' queries (by combining them with several fake queries). To generate realistic fake queries, PEAS exploits previous queries sent by the users in the system. Finally, we present mechanisms to identify sensitive queries. Our goal is to adapt existing protection mechanisms to protect sensitive queries only, and thus save user resources (e.g., CPU, RAM). We design two modules to identify sensitive queries. By deploying these modules on real protection mechanisms, we establish empirically that they dramatically improve the performance of the protection mechanisms. KW - Privacy KW - Search Engine KW - Unlinkability KW - Indistinguishability KW - Suchmaschine KW - Datensicherung KW - Computersicherheit KW - Privatsphäre Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4652 ER - TY - THES A1 - Joblin, Mitchell T1 - Structural and Evolutionary Analysis of Developer Networks N2 - Large-scale software engineering projects are often distributed among a number of sites that are geographically separated by a substantial distance.
In globally distributed software projects, time zone issues, language and cultural barriers, and a lack of familiarity among members of different sites all introduce coordination complexity and present significant obstacles to achieving a coordinated effort. For large-scale software engineering projects to satisfy their scheduling and quality goals, many developers must be capable of completing work items in parallel. A key factor in achieving this goal is to remove interdependencies among work items insofar as possible. By applying principles of modularity, work item interdependence can be reduced, but not removed entirely. As a result of uncertainty during the design and implementation phases and incomplete or misunderstood design intents, dependencies between work items inevitably arise and lead to requirements for developers to coordinate. The capacity of a project to satisfy coordination needs depends on how the work items are distributed among developers and how developers are organizationally arranged, among other factors. When coordination requirements fail to be recognized and appropriately managed, anecdotal evidence and prior empirical studies indicate that this condition results in decreased product quality and developer productivity. In essence, properties of the socio-technical environment, comprising developers and the tasks they must complete, provide important insights concerning the project's capacity to meet product quality and scheduling goals. In this dissertation, we make contributions to support socio-technical analyses of software projects by developing approaches for abstracting and analyzing the technical and social activities of developers. More specifically, we propose a fine-grained, verifiable, and fully automated approach to obtain a proper view on developer coordination, based on commit information and source-code structure, mined from version-control systems. We apply methodology from network analysis and machine learning to identify developer communities automatically. To evaluate our approach, we analyze ten open-source projects with complex and active histories, written in various programming languages. By surveying 53 open-source developers from the ten projects, we validate the accuracy of the extracted developer network and the authenticity of the inferred community structure. Our results indicate that developers of open-source projects form statistically significant community structures and that this particular network view largely coincides with developers' perceptions. Equipped with a valid network view on developer coordination, we extend our approach to analyze the evolutionary nature of developer coordination. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. We found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom.
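An illustrative sketch (not the thesis's fine-grained construction) of how commit data can be turned into a developer network and clustered into communities with networkx: developers become nodes, co-editing the same file adds a weighted edge, and communities are detected by modularity maximization. The input data and the co-editing edge rule are invented.

```python
import itertools
import networkx as nx

# (developer, file) pairs, e.g. mined from a version-control system (invented data).
commits = [
    ("alice", "core.c"), ("bob", "core.c"), ("bob", "net.c"),
    ("carol", "net.c"), ("dave", "ui.c"), ("alice", "ui.c"),
]

G = nx.Graph()
files = {}
for dev, path in commits:
    files.setdefault(path, set()).add(dev)
for devs in files.values():
    for a, b in itertools.combinations(sorted(devs), 2):
        weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=weight)

communities = nx.algorithms.community.greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```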
In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which highly central developers are hierarchically arranged and all other developers are not. Our results suggest that the organizational structure of large software projects is constrained to evolve towards a state that balances the costs and benefits of coordination, and the mechanisms used to achieve this state depend on the project's scale. As a final contribution, we use developer networks to establish a richer understanding of the different roles that developers play in a project. Developers of open-source projects are often classified according to core and peripheral roles. Typically, count-based operationalizations, which rely on simple counts of individual developer activities (e.g., number of commits), are used for this purpose, but there is concern regarding their validity and ability to elicit meaningful insights. To shed light on this issue, we investigate whether count-based operationalizations of developer roles produce consistent results, and we validate them with respect to developers' perceptions by surveying 166 developers. We improve over the state of the art by proposing a relational perspective on developer roles, using our fine-grained developer networks, and by examining developer roles in terms of developers' positions and stability within the developer network. In a study of 10 substantial open-source projects, we found that the primary difference between the count-based and our proposed network-based core--peripheral operationalizations is that the network-based ones agree more with developer perception than count-based ones. Furthermore, we demonstrate that a relational perspective can reveal further meaningful insights, such as that core developers exhibit high positional stability, upper positions in the hierarchy, and high levels of coordination with other core developers, which confirms assumptions of previous work. Overall, our research demonstrates that data stored in software repositories, paired with appropriate analysis approaches, can elicit valuable, practical, and valid insights concerning socio-technical aspects of software development. KW - Software Engineering Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4616 ER - TY - THES A1 - Sipal, Bilge T1 - Border Basis Schemes N2 - The basic idea of border basis theory is to describe a zero-dimensional ring P/I by an order ideal of terms whose residue classes form a K-vector space basis of P/I. The O-border basis scheme is a scheme that parametrizes all zero-dimensional ideals that have an O-border basis. In general, the O-border basis scheme is not an affine space. Subsequently, in [Huib09] it is proved that if an order ideal with "d" elements is defined in a two-dimensional polynomial ring and it is of some special shapes, then the O-border basis scheme is isomorphic to the affine space of dimension 2d. 
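For orientation, the standard notions behind the discussion above (textbook definitions from border basis theory, not results specific to this thesis): for an order ideal O in P = K[x_1, ..., x_n], the border of O and the shape of the border prebasis polynomials are

```latex
\partial\mathcal{O} = \bigl(x_1\mathcal{O} \cup \dots \cup x_n\mathcal{O}\bigr) \setminus \mathcal{O},
\qquad
b_t = t - \sum_{s \in \mathcal{O}} c_{t,s}\, s \quad (t \in \partial\mathcal{O},\; c_{t,s} \in K).
```

Such a prebasis is an O-border basis of a zero-dimensional ideal I if all b_t lie in I and the residue classes of the elements of O form a K-vector space basis of P/I; the coefficients c_{t,s} are the coordinates that the border basis scheme parametrizes.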
This thesis is dedicated to finding a more general condition for an O-border basis scheme to be isomorphic to an affine space of dimension "nd" that is independent of the shape of the order ideal, where "d" is the number of elements of the order ideal and "n" is the dimension of the polynomial ring in which the order ideal is defined. We accomplish this in six chapters. In Chapters 2 and 3 we develop the concepts and properties of border basis schemes. In Chapter 4 we transfer the smoothness criterion (see [Huib05]) for the point (0,...,0) in a Hilbert scheme of points to the monomial point of the border basis scheme by employing the tools from border basis theory. In Chapter 5 we explain trace and Jacobi identity syzygies of the defining equations of an O-border basis scheme and characterize them by the arrow grading. In Chapter 6 we give a criterion for the isomorphism between the 2d-dimensional affine space and the O-border basis scheme by using the results from Chapters 3 and 4. The techniques from the other chapters are applied in Section 6.1 to segment border basis schemes and in Section 6.2 to O-border basis schemes for which O is of the sawtooth form. KW - Border Bases, Border Basis Scheme, Monomial point, Cotangent Space, Hilbert Schemes KW - Polynomring KW - Basis (Mathematik) KW - Kommutative Algebra, Randbasen, Randbasen Schema Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4702 ER - TY - THES A1 - Handeck, Jörg T1 - Analyse und Korrektur von Teileprogrammen für numerisch gesteuerte Werkzeugmaschinen N2 - Splinekurven sind oft das erste Mittel der Wahl, wenn Daten interpoliert oder approximiert werden sollen. Sie spielen in vielen praktischen Anwendungsbereichen eine wichtige Rolle und sind in Bereichen des CAD/CAM nicht mehr wegzudenken. In der vorliegenden Arbeit werden in diesem Kontext Bahnpunkte zur Steuerung von Werkzeugmaschinen untersucht. Die Analyse wird mit Hilfe eines Multiresolution-Ansatzes (MRA) für Splinekurven mit adaptiven Knotenfolgen realisiert. Dieser MRA-Ansatz basiert auf einer Least-Squares-Projektion zum Knotenentfernen und unterscheidet sich somit von bekannten Ansätzen, die auf orthogonalen Komplementen aufbauen. Des Weiteren wird ein Konzept zur Approximation von Orientierungsdaten mittels homogener Quaternionensplines vorgestellt. Diese Splines leben auf der Sphäre und lassen sich mittels Knotenentfernen bzw. -einfügen verfeinern. Somit lässt sich das vorgestellte MRA-Analyseverfahren ebenfalls auf diese Kurven anwenden. Weiter konnte für diese Kurven eine Konvexe-Hülle-Eigenschaft nachgewiesen werden. KW - Spline MRA KW - Spline KW - Numerische Steuerung KW - Werkzeugmaschine Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4546 ER - TY - THES A1 - Zwicklbauer, Stefan T1 - Robust Entity Linking in Heterogeneous Domains N2 - Entity Linking is the task of mapping terms in arbitrary documents to entities in a knowledge base by identifying the correct semantic meaning. It is applied in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning and Question Answering. Most existing Entity Linking systems were optimized for specific domains (e.g., general domain, biomedical domain), knowledge base types (e.g., DBpedia, Wikipedia), or document structures (e.g., tables) and types (e.g., news articles, tweets).
This led to very specialized systems that lack robustness and are only applicable to very specific tasks. In this regard, this work focuses on the research and development of a robust Entity Linking system in terms of domains, knowledge base types, and document structures and types. To create a robust Entity Linking system, we first analyze the following three crucial components of an Entity Linking algorithm in terms of robustness criteria: (i) the underlying knowledge base, (ii) the entity relatedness measure, and (iii) the textual context matching technique. Based on the analyzed components, our scientific contributions are threefold. First, we show that a federated approach leveraging knowledge from various knowledge base types can significantly improve robustness in Entity Linking systems. Second, we propose a new state-of-the-art, robust entity relatedness measure for topical coherence computation based on semantic entity embeddings. Third, we present the neural-network-based approach Doc2Vec as a textual context matching technique for robust Entity Linking. Based on our previous findings and outcomes, our main contribution in this work is DoSeR (Disambiguation of Semantic Resources). DoSeR is a robust, knowledge-base-agnostic Entity Linking framework that extracts relevant entity information from multiple knowledge bases in a fully automatic way. The integrated algorithm represents a collective, graph-based approach that utilizes semantic entity and document embeddings for entity relatedness and textual context matching computation. Our evaluation shows that DoSeR achieves state-of-the-art results over a wide range of different document structures (e.g., tables), document types (e.g., news documents) and domains (e.g., general domain, biomedical domain). In this context, DoSeR outperforms all other (publicly available) Entity Linking algorithms on most data sets. KW - Entity Linking KW - Neural Networks KW - Knowledge Bases KW - Linked Data KW - Wissensbasiertes System Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5047 ER - TY - THES A1 - Wendler, Philipp T1 - Towards Practical Predicate Analysis N2 - Software model checking is a successful technique for automated program verification. Several of the most widely used approaches for software model checking are based on solving first-order-logic formulas over predicates using SMT solvers, e.g., predicate abstraction, bounded model checking, k-induction, and lazy abstraction with interpolants. We define a configurable framework for predicate-based analyses that allows expressing each of these approaches. This unifying framework highlights the differences between the approaches, producing new insights, and facilitates research on further algorithms and their combinations, as witnessed by several research projects that have been conducted on top of this framework. In addition to this theoretical contribution, we provide a mature implementation of our framework in a software verifier, which allows applying all of the mentioned approaches in practice. This implementation is used by other research groups, e.g., to find bugs in the Linux kernel, and has proven its competitiveness by winning gold medals in the International Competition on Software Verification. Tools and approaches for software model checking like our predicate analysis are typically evaluated using performance benchmarking on large sets of verification tasks.
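For illustration, a naive way to measure the CPU time and peak memory of a tool run with POSIX resource accounting; it is exactly the kind of ad-hoc measurement whose pitfalls are discussed next, and BenchExec instead relies on Linux cgroups for reliable accounting and isolation. The measured command is a placeholder.

```python
import resource      # Unix-only
import subprocess

def run_and_measure(cmd):
    """Naive measurement via getrusage: no isolation, and the child totals
    include every child process waited on so far in this interpreter."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    result = subprocess.run(cmd, capture_output=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu_s = (after.ru_utime + after.ru_stime) - (before.ru_utime + before.ru_stime)
    peak_mem = after.ru_maxrss   # max RSS of the largest child so far (on Linux: KiB)
    return result.returncode, cpu_s, peak_mem

print(run_and_measure(["/bin/true"]))   # placeholder for an actual verifier invocation
```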
We have identified several pitfalls that can silently arise during benchmarking, and we have found that the benchmarking techniques and tools that are used by many researchers do not guarantee valid results in practice, but may produce arbitrarily large measurement errors. Furthermore, certain hardware characteristics can also have a nondeterministic influence on the measurements. In order to be able to properly evaluate our framework for software verification, we study the effects of these hardware characteristics, and define a list of the most important requirements that need to be ensured for reliable benchmarking. As a solution, we present the open-source benchmarking framework BenchExec, which, in contrast to other benchmarking tools, fulfills all our requirements and aims at making reliable benchmarking easy. BenchExec has already been adopted by several research groups and the International Competition on Software Verification. Using the power of BenchExec, we conduct an experimental evaluation of our unifying framework for predicate analysis. We study the effect of varying the SMT solver and the way program semantics are encoded in formulas across several verification algorithms and find that these technical choices can significantly influence the results of experimental studies of verification approaches. This is valuable information both for researchers who study verification approaches and for users who apply them in practice. Our comprehensive study of 120 different configurations would not have been possible without our highly flexible and configurable unifying framework for predicate analysis and shows that the latter is a valuable base for conducting experiments. Furthermore, we show, using a comparison against top-ranking verifiers from the International Competition on Software Verification, that our implementation is highly competitive and can outperform the state of the art. KW - Programmverifikation Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5098 ER - TY - THES A1 - He, Xiaobing T1 - Threat Assessment for Multistage Cyber Attacks in Smart Grid Communication Networks N2 - In smart grids, managing and controlling power operations are supported by information and communication technology (ICT) and supervisory control and data acquisition (SCADA) systems. The increasing adoption of new ICT assets in smart grids is making smart grids vulnerable to cyber threats, as well as raising numerous concerns about the adequacy of current security approaches. As a single act of penetration is often not sufficient for an attacker to achieve his/her goal, multistage cyber attacks may occur. Due to the interdependence between the power grid and the communication network, a multistage cyber attack not only affects the cyber system but also impacts the physical system. This thesis investigates an application-oriented stochastic game-theoretic cyber threat assessment framework, which is strongly related to the information security risk management process as standardized in ISO/IEC 27005. The proposed cyber threat assessment framework seeks to address the specific challenges (e.g., dynamically changing attack scenarios and understanding cascading effects) when performing threat assessments for multistage cyber attacks in smart grid communication networks.
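A toy sketch of the node-overload cascading effects mentioned above, in the spirit of standard load-capacity models rather than the thesis's formulation: node load is approximated by betweenness centrality, capacity is a fixed margin above the initial load, and failing one node re-triggers overload checks until the cascade stops. The graph and all parameters are invented.

```python
import networkx as nx

def cascade(G, initial_failure, alpha=0.2):
    """Remove nodes whose recomputed load exceeds capacity = (1 + alpha) * initial load."""
    load0 = nx.betweenness_centrality(G)
    capacity = {v: (1 + alpha) * load0[v] for v in G}
    H = G.copy()
    H.remove_node(initial_failure)
    failed = {initial_failure}
    while True:
        load = nx.betweenness_centrality(H)
        overloaded = [v for v in H if load[v] > capacity[v]]
        if not overloaded:
            return failed
        H.remove_nodes_from(overloaded)
        failed.update(overloaded)

G = nx.barabasi_albert_graph(50, 2, seed=3)
print(len(cascade(G, initial_failure=0)))   # number of failed nodes after the cascade
```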
The thesis looks at the stochastic and dynamic nature of multistage cyber attacks in smart grid use cases and develops a stochastic game-theoretic model to capture the interactions of the attacker and the defender in multistage attack scenarios. To provide a flexible and practical payoff formulation for the designed stochastic game-theoretic model, this thesis presents a mathematical analysis of cascading failure propagation (including both interdependency cascading failure propagation and node overloading cascading failure propagation) in smart grids. In addition, the thesis quantifies and characterizes the disruptive effects of cyber attacks on physical power grids. Furthermore, this thesis discusses, in detail, the ingredients of the developed stochastic game-theoretic model and presents the implementation steps of the investigated stochastic game-theoretic cyber threat assessment framework. An application of the proposed cyber threat assessment framework for evaluating a demonstrated multistage cyber attack scenario in smart grids is shown. The cyber threat assessment framework can be integrated into an existing risk management process, such as ISO 27000, or applied as a standalone threat assessment process in smart grid use cases. KW - Smart Grids KW - Game Theory KW - Cascading Failures KW - Threat Assessment KW - Communication Networks KW - Intelligentes Stromnetz KW - Sicherheit KW - Spieltheorie Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5051 ER - TY - THES A1 - Ruppert, Julia T1 - Asymptotic Expansion for the Time Evolution of the Probability Distribution Given by the Brownian Motion on Semialgebraic Sets N2 - In this thesis, we examine whether the probability distribution given by the Brownian Motion on a semialgebraic set is definable in an o-minimal structure and we establish asymptotic expansions for the time evolution. We study the probability distribution as an example of the occurrence of special parameterized integrals of a globally subanalytic function and the exponential function of a globally subanalytic function. This work is motivated by the work of Comte, Lion and Rolin, who considered parameterized integrals of globally subanalytic functions, by the work of Cluckers and Miller, who examined parameterized integrals of constructible functions, and by the work of Cluckers, Comte, Miller, Rolin and Servi, who treated oscillatory integrals of globally subanalytic functions. In the one-dimensional case, we show that the probability distribution on a family of sets which are definable in an o-minimal structure is definable in the Pfaffian closure. In the two-dimensional case, we investigate asymptotic expansions for the time evolution. As time t approaches zero, we show that the integrals behave like a Puiseux series, which is not necessarily convergent. As t tends towards infinity, we show that the probability distribution is definable in the expansion of the real ordered field by all restricted analytic functions if the semialgebraic set is bounded. For this purpose, we apply results of Lion and Rolin on parameterized integrals of globally subanalytic functions. By establishing the asymptotic expansion of the integrals over an unbounded set, we demonstrate that this expansion has the form of a convergent Puiseux series with negative exponents and their logarithm. Subsequently, we get that the asymptotic expansion is definable in an o-minimal structure.
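For orientation, the quantity studied above can be written down explicitly (with the usual probabilist normalization of Brownian motion, which may differ from the thesis's convention by constants): the probability that n-dimensional Brownian motion started at the origin lies in a semialgebraic set A at time t is the parameterized Gaussian integral

```latex
P_t(A) = \int_{A} \frac{1}{(2\pi t)^{n/2}} \, e^{-\lVert x\rVert^{2}/(2t)} \, dx , \qquad t > 0 ,
```

and it is the behaviour of this integral as t tends to zero or to infinity that the asymptotic expansions above describe.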
Finally, we study the three-dimensional case and prove that the probability distribution given by the Brownian Motion behaves like a Puiseux series as time t tends towards zero. As t approaches infinity and the semialgebraic set is bounded, it can be ascertained from results of Cluckers and Miller that the probability distribution has the form of a constructible function and is therefore definable in an o-minimal structure. If the semialgebraic set is unbounded, we establish the asymptotic expansions and prove that the probability distribution given by the Brownian Motion on unbounded sets has an asymptotic expansion of the form of a constructible function. As a consequence, the asymptotic expansion is definable in an o-minimal structure. KW - o-minimality KW - asymptotic expansions KW - globally subanalytic sets KW - exponential parameterized integrals KW - Brownian Motion KW - Brownsche Bewegung KW - O-Minimalität KW - Asymptotische Entwicklung Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5069 ER - TY - THES A1 - Klaghstan, Merza T1 - Multimedia data dissemination in opportunistic systems N2 - Opportunistic networks (OppNets) are human-centric mobile ad-hoc networks, in which neither the topology nor the participating nodes are known in advance. Routing is dynamically planned following the store-carry-and-forward paradigm, which takes advantage of people's mobility. This widens the range of communication and supports indirect end-to-end data delivery. But due to individuals' mobility, OppNets are characterized by frequent communication disruptions and uncertain data delivery. Hence, these networks are mostly used for exchanging small messages like disaster alarms or traffic notifications. Other scenarios that require the exchange of larger data (e.g. video) are still challenging due to the characteristics of this kind of network. However, there are still multimedia sharing scenarios where a user might need to switch from infrastructural communications to an ad-hoc alternative. Examples are the cases of 1) absence of infrastructural networks in remote rural areas, 2) high costs due to roaming or limited data volumes, or 3) undesirable censorship by third parties while exchanging sensitive content. Consequently, we target in this thesis a video dissemination scheme in OppNets. For the video delivery problem in sparse opportunistic networks, we propose a solution with the objective of reducing the video playout delay, enabling the recipient to start playing the video content as soon as possible, even if at low quality. The received video then later reaches a higher quality level, ensuring a better viewing experience. The proposed solution comprises three contributions. The first one is to granulate the video at the source node into smaller parts and to associate them with unequal redundancy degrees. This is technically based on Scalable Video Coding (SVC), which encodes a video into several layers of unequal importance for viewing the content at different quality levels. Layers are routed using the Spray-and-Wait routing protocol, with different redundancy factors for the different layers depending on their importance degree. In this context, a video viewing QoE metric is also proposed, which takes the perceived video quality, the delivery delay and the network overhead into consideration on a scalable basis.
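A toy illustration of the unequal-redundancy idea just described: more important SVC layers receive more Spray-and-Wait copies. The halving rule and the copy budget are invented for illustration and are not the thesis's parameterization.

```python
def copies_per_layer(num_layers, base_copies=8):
    """Layer 0 (base layer) is most important and gets the most spray copies;
    each further enhancement layer gets roughly half as many, but at least one."""
    return {layer: max(1, base_copies // (2 ** layer)) for layer in range(num_layers)}

print(copies_per_layer(4))   # {0: 8, 1: 4, 2: 2, 3: 1}
```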
Second, we take advantage of the small units of the Network Abstraction Layer (NAL), which compose SVC layers. NAL units are packetized together under specific size constraints to optimize granularity. Packet sizes are tuned in an adaptive way, with regard to the dynamic network conditions. Each node is enabled to record a history of environmental information regarding the contacts and forwarding opportunities, and to use this history to predict future opportunities and optimize the sizes accordingly. Lastly, the receiver (destination) node is pushed into action by reacting to missing data parts in a composite "backward" loss concealment mechanism. The receiver first asks other nodes in the network for the missing data in a request-response fashion. Then, since the transmission is concerned with video content, video frame loss error concealment techniques are also exploited at the receiver side. Consequently, we propose to combine the two techniques in the loss concealment mechanism, which is then enabled to react to missing data parts. To study the feasibility and the applicability of the proposed solutions, simulation-driven experiments are performed, and statistical results are collected and analyzed. We obtained promising results that show the applicability of video dissemination in opportunistic delay-tolerant networks and open the door to a range of possible future work. KW - Opportunistisches Netzwerk KW - Multimedia KW - Videoübertragung Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4438 ER - TY - THES A1 - Kinseher, Josef T1 - New Methods for Improving Embedded Memory Manufacturing Tests N2 - Due to the need for fast and energy-efficient accesses to growing amounts of data, the share and number of embedded memories inside modern microchips have been continuously increasing in recent years. Since embedded memories have the highest integration density of a fabrication technology, they pose special test challenges due to complex manufacturing defects as well as strong transistor aging phenomena. This necessitates efficient methods for detecting more subtle defects while keeping test costs low. This work presents novel methods and techniques for improving the efficiency of embedded memory manufacturing tests. The proposed methods are demonstrated in an industrial setting based on production-proven transistor, memory and chip models, and their benefits over the current state of the art are worked out. KW - Speicher KW - Chip KW - Eingebettetes System Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6017 ER - TY - THES A1 - Gütschow, Silja T1 - Roulets - Eine Integraltransformation zur Bestimmung gerichteter Singularitäten N2 - In dieser Arbeit wird eine neue Integraltransformation, die Roulettransformation, eingeführt. Diese arbeitet mit anisotropen Skalierungen und Rotationen. Es wird gezeigt, dass die Roulettransformation allgemeine gerichtete Singularitäten im Sinne von temperierten Distributionen auflöst. Die Abklingraten an Punkt- sowie Liniensingularitäten werden explizit angegeben. KW - Integraltransformation KW - Singularität Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5929 ER - TY - THES A1 - Tomashevich, Victor T1 - Fault Tolerance Aspects of Virtual Massive MIMO Systems N2 - Employment of a very large number of antennas is seen as the key technology to provide future users with very high data rates.
At the same time, the implementation complexity rises due to the large memories required and the sophisticated signal processing algorithms employed. Continuous technology downscaling allows the implementation of such complex digital designs. At the same time, its inherent variability and vulnerability to physical disturbances violate the assumption of perfectly reliable hardware operation. This work considers Unique Word OFDM, which represents an alternative to standard Cyclic Prefix OFDM and provides superior detection quality. Unique Word OFDM is generalized to a MIMO system, which allows an interpretation as a virtual massive MIMO system with only a few physical antennas. Detection methods for the introduced generalization are discussed and their performance is quantified. Because of the large memory size required, linear detection represents the cost- and performance-effective solution. Possible memory errors due to radiation effects or voltage scaling are addressed, and a nonlinear MMSE detection algorithm is proposed. This algorithm keeps track of the memory errors and is able to significantly mitigate their effect on the quality of the estimated data. Apart from memory issues, the reliability of the actual computational hardware that constitutes the receiver is also of concern in this work. An in-house implementation of the MMSE Sorted Givens Rotations algorithm is subjected to transient fault injection. The impact of faults in various parts of the implemented circuit on the detection performance is quantified. The most vulnerable components of the implemented circuit in terms of reliability are identified. Security is another major focus of this work, since most current implementations include cryptographic devices. Fault-based attacks on such systems are known to be able to extract the secret key in feasible time. The remaining part of this work addresses such fault injection-based malicious attacks. Countermeasures based on a combination of information and hardware redundancy are considered. Recently introduced robust codes target such attacks by providing guaranteed detection capability. The performance of these codes is assessed by application to actual cryptographic and general-purpose circuits. The work introduces metrics that help to identify fault locations in the circuit which could escape detection with high probability. These locations are targeted by transistor resizing that renders fault injection infeasible. KW - MIMO Systems KW - MIMO KW - Fehlertoleranz Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4047 ER -
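The linear MMSE detector mentioned above, in its textbook form for y = Hx + n with unit-power symbols and noise variance sigma^2; a minimal NumPy sketch without the memory-error tracking of the thesis's nonlinear variant.

```python
import numpy as np

def mmse_detect(H, y, sigma2):
    """Linear MMSE estimate: x_hat = (H^H H + sigma^2 I)^(-1) H^H y."""
    nt = H.shape[1]
    A = H.conj().T @ H + sigma2 * np.eye(nt)
    return np.linalg.solve(A, H.conj().T @ y)

rng = np.random.default_rng(0)
nr, nt, sigma2 = 8, 4, 0.1
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = ((2 * rng.integers(0, 2, nt) - 1) + 1j * (2 * rng.integers(0, 2, nt) - 1)) / np.sqrt(2)  # QPSK
y = H @ x + np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
print(np.round(mmse_detect(H, y, sigma2), 2))   # estimate, close to the transmitted x
print(np.round(x, 2))
```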