000 Computer Science, Information & General Works
IoT is defined as a paradigm in which "things" have sensing, actuating, communicating, and self-configuring abilities and are connected to each other and to the Internet. Recent advancements in the manufacturing industry have made it possible to produce embedded devices with various sensors and actuators in large volumes at reduced cost. As part of the IoT revolution, everyday devices such as televisions, refrigerators, and cars, and even industrial machines, are now connected IoT devices. Recent studies have predicted that by 2025 there will be over 75 billion such IoT devices connected to the Internet.
Providers of IoT-based services want to integrate their services to satisfy customer requirements. For example, in the mobility scenario, different mobility solution providers want to jointly offer a multi-modal ticket to their customers. In such a distributed and loosely coupled environment, each owner and stakeholder wants to safeguard their own integrity, confidentiality, and functionality goals. This means that the distributed rules and conditions defined by the individual owners must be enforced on the participating entities (e.g., customers or partners using their services). The owners and stakeholders do not necessarily trust each other's actions. Therefore, a mechanism is required that guarantees the rules and conditions specified by the different owners.
Attacks on IoT devices and similar computing systems are increasing and becoming more advanced. IoT devices are often constrained, i.e., they have limited processing power, memory, and energy. Security mechanisms designed for traditional computing systems, e.g., computers, servers, or mobile computing devices such as smartphones, may not fit these constrained IoT devices. Weak security mechanisms and unenforced security measures were among the main reasons for recent successful attacks on IoT devices and services. As IoT devices are now used in many sensitive settings, including critical infrastructures, securing them is more important than ever. This thesis focuses on developing mechanisms that secure IoT devices and services, and on enforcing the rules and conditions specified by the owners on entities that want to access the owners' resources.
In classical computer systems, security automata are used for specifying security policies, and monitoring mechanisms are used for enforcing such policies. For instance, a reference monitor observes the execution and stops it when the security policies are about to be violated; thus, the security policies are enforced. To restrict an adversary from using protected IoT devices or services for malicious purposes, it must be ensured that a workflow is followed in order to access the protected resource. In distributed IoT systems where the policies are governed by different owners, each owner specifies their rules and conditions in their own workflows. The workflows contain tasks that must be performed in a particular order. The goal of this thesis is to develop mechanisms to specify and enforce these workflows in a distributed IoT environment.
This thesis introduces a distributed Workflow-Aware Access Control (WFAC) framework that restricts entities to doing only what they are allowed to do in a collaborative environment. To gain access to a service protected by the WFAC framework, every workflow participant must prove that he or she is in a particular state of an authorized workflow. Authorized means two things: (a) the owner has authorized the workflow to be executed, and (b) the workflow participant is authorized to execute it. This restricts an adversary's access to the devices and their services. The security policies defined by the different owners are modeled as workflows and specified using Petri nets. The policies are then enforced with the help of the WFAC framework, which supports error handling, accountability, integration of practitioner-friendly tools, and interoperability with existing security mechanisms such as OAuth. Thus, the WFAC framework guarantees the integrity of workflows in a distributed environment.
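The workflow-as-Petri-net idea can be made concrete with a small sketch. The following is a minimal toy model, assuming a simple place/transition encoding with unit token weights; the names and the two-step mobility workflow are illustrative assumptions, not the WFAC framework's actual interfaces.

```python
# Minimal sketch: a participant's progress through an authorized
# workflow, modeled as a Petri net (hypothetical encoding).

class PetriNet:
    def __init__(self, places, transitions):
        # transitions: name -> (set of input places, set of output places)
        self.places = set(places)
        self.transitions = transitions

    def enabled(self, marking, t):
        inputs, _ = self.transitions[t]
        return all(marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, marking, t):
        if not self.enabled(marking, t):
            raise ValueError(f"transition {t!r} not enabled")
        inputs, outputs = self.transitions[t]
        m = dict(marking)
        for p in inputs:
            m[p] -= 1
        for p in outputs:
            m[p] = m.get(p, 0) + 1
        return m

# Two-step workflow: a ticket must be booked before the service is used.
net = PetriNet(
    places={"start", "booked", "done"},
    transitions={
        "book_ticket": ({"start"}, {"booked"}),
        "use_service": ({"booked"}, {"done"}),
    },
)
marking = {"start": 1}
marking = net.fire(marking, "book_ticket")   # allowed first step
assert net.enabled(marking, "use_service")   # access now permitted
```

A reference monitor in this spirit would deny "use_service" for any participant whose marking does not enable the transition, i.e., who has not completed the required earlier tasks.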
Analysing the security assumptions underlying the WebRTC and postMessage APIs led us to discover a novel attack that abuses the browsers' persistent storage capabilities. The presented attack can be executed without the website visitor's knowledge and requires neither browser vulnerabilities nor additional software on the browser's side. To exemplify this, we study how an attacker can use browsers to create a network for persistent storage and distribution of arbitrary data.
In our proof of concept, the total storage of the network, and therefore the space used within each browser, grows linearly with the number of origins delivering the malicious JavaScript code. Further, data transfers between browsers are not restricted by the Same Origin Policy, which allows for a unified cross-origin browser network, regardless of the origin from which the script executing the functionality is loaded.
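The linear growth follows from per-origin storage accounting, as the following toy back-of-the-envelope model shows (plain Python, not browser code; the quota value is an assumption for illustration, as real quotas vary by browser).

```python
# Toy model of the linear growth: browsers account storage per origin,
# so a script delivered from k distinct origins can claim k separate
# quotas within a single browser.
PER_ORIGIN_QUOTA_MB = 10  # hypothetical quota; real values vary by browser

def total_capacity_mb(origins):
    return len(set(origins)) * PER_ORIGIN_QUOTA_MB

print(total_capacity_mb([f"https://cdn{i}.example" for i in range(8)]))  # 80
```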
In the course of our work, we assess the feasibility of a real-life deployment of the network by running experiments using Linux containers and browser automation tools. Moreover, we show how security mechanisms against third-party tracking, cross-site scripting and click-jacking can diminish the attack's impact, or even prevent it.
Due to the need for fast and energy-efficient access to growing amounts of data, the share and number of embedded memories inside modern microchips have been increasing continuously in recent years. Since embedded memories have the highest integration density of a fabrication technology, they pose special test challenges due to complex manufacturing defects as well as strong transistor aging phenomena. This necessitates efficient methods for detecting ever more subtle defects while keeping test costs low. This work presents novel methods and techniques for improving the efficiency of embedded memory manufacturing tests. The proposed methods are demonstrated in an industrial setting based on production-proven transistor, memory, and chip models, and their benefits over the current state of the art are worked out.
Performance optimization of stencil codes requires data-locality improvements. The polyhedron model for loop transformation is well suited for such optimizations, with established techniques such as the PLuTo algorithm and diamond tiling. However, for stencil codes, the domain of our project ExaStencils, it fails to yield optimal results. As an alternative, we propose a new, optimized, multi-dimensional polyhedral search-space exploration and demonstrate its effectiveness: we obtain better results than existing approaches in several cases. We also propose how to specialize the search for the domain of stencil codes, which dramatically reduces the exploration effort without significantly impairing performance.
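For readers unfamiliar with the locality idea behind such transformations, the following toy sketch contrasts a plain 1D Jacobi sweep with a spatially blocked variant. It only illustrates why tiling helps cache reuse; it is not one of the polyhedral transformations (PLuTo, diamond tiling) discussed above, and the tile size is an arbitrary assumption.

```python
import numpy as np

def jacobi(a, steps):
    # Plain sweep: one full pass over the array per time step.
    b = a.copy()
    for _ in range(steps):
        b[1:-1] = 0.5 * (a[:-2] + a[2:])
        a, b = b, a
    return a

def jacobi_tiled(a, steps, tile=1024):
    # Spatially blocked sweep: each tile is updated while it is
    # cache-resident. Time tiling (e.g., diamond tiling) goes further
    # and fuses several time steps per tile.
    b = a.copy()
    n = a.size
    for _ in range(steps):
        for lo in range(1, n - 1, tile):
            hi = min(lo + tile, n - 1)
            b[lo:hi] = 0.5 * (a[lo - 1:hi - 1] + a[lo + 1:hi + 1])
        a, b = b, a
    return a

x = np.linspace(0.0, 1.0, 10_000)
assert np.allclose(jacobi(x.copy(), 20), jacobi_tiled(x.copy(), 20))
```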
Smart Grids integrate currently isolated power and communications networks, while introducing several new technologies on the hardware and software sides. One of the most important ingredients is the potential for demand-response programs, which offer the possibility of sending instructions to consumers to adapt their power consumption over a certain period of time. However, high-frequency data collection exposes consumers’ usage behaviors, leading to security and privacy challenges for Smart Grids.
In this thesis, three cryptographic schemes are constructed for different demand-response programs. In the mandatory incentive-based demand-response program, privacy preservation depends on the power consumption of consumers. An anonymous authentication scheme is constructed for overload auditing and privacy preservation. Consumers' identities are anonymous during normal operation. The operation center defines an acceptable consumption threshold at times of power shortage. Consumers must follow the instruction and curtail their power consumption to meet the threshold. If they do so, the consumers keep their anonymity, while disobedient consumers, whose power consumption exceeds the threshold, can be identified. Security analysis demonstrates that the constructed anonymous authentication scheme is secure in the random oracle model.

In the voluntary incentive-based demand-response program, consumers are categorized as either obedient or disobedient according to their consumption curtailment. Consumers use a homomorphic encryption algorithm to encrypt their usage and periodically report the ciphertexts to the operation center. At a time of grid instability, the obedient consumers reduce their consumption and prove their curtailment using a range proof. Both the usage reports and the obedient consumers' proofs concerning their consumption are reported without leaking private information. In order to meet the real-time requirement, a security model is proposed and a batch verification algorithm is constructed, which is proved secure in the defined oracle model.

Apart from reward and penalty detection in demand-response programs, theft detection is also an important requirement in Smart Grids. To achieve theft detection, this thesis employs dynamic k-times anonymous authentication and blind signatures to create an efficient theft detection mechanism for the prepaid card system, where consumers pay for their consumption in advance and obtain credentials. A consumer sends the credentials anonymously and obtains corresponding credentials during times of consumption. If a thief tries to send reused credentials to steal electricity, his anonymity is revoked. Finally, this thesis proves that the proposed mechanism finds the real identities of power thieves without sacrificing the privacy of honest consumers, under the random oracle model.
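The abstract does not name the homomorphic encryption algorithm used for usage reporting. As an illustration of the general idea, the following sketch uses Paillier, a standard additively homomorphic scheme, with toy (insecure) key sizes chosen for readability: the operation center can aggregate encrypted readings without learning any individual reading.

```python
# Minimal Paillier sketch (toy primes; real deployments use >=2048-bit
# moduli). Additive homomorphism: E(m1) * E(m2) mod n^2 = E(m1 + m2).
import math
import secrets

def keygen(p=1_000_003, q=1_000_033):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n, n + 1), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    u = pow(c, lam, n * n)
    return ((u - 1) // n) * mu % n

pk, sk = keygen()
# Each consumer encrypts its reading; the operation center multiplies
# the ciphertexts to obtain an encryption of the total consumption.
readings = [17, 42, 5]
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(pk, m)) % (pk[0] ** 2)
assert decrypt(pk, sk, aggregate) == sum(readings)
```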
The Internet of Things (IoT) is a network of computational services, devices, and people that share information with each other. In IoT, inter-system communication is possible, and human interaction is not required. IoT devices are penetrating home and office building environments. According to current estimates, about 35 billion IoT devices will be connected by the year 2021. In the IoT business model, value comes from integrating devices into applications, e.g., home and office automation. In general, an IoT application associates different information sources with actions that can modify the environment (e.g., change the room's temperature), inform a person (e.g., send an e-mail), or activate other services (e.g., buy milk online).
In this thesis, we focus on the commissioning and verification processes of IoT devices used in building automation applications. Within a building's lifespan, new devices are added, interior spaces are refurbished, and faulty devices are replaced. All of these changes are currently made manually. Furthermore, a context-aware Building Management System (BMS) is an IoT application that measures direct-context from the building's sensors to characterize environmental conditions, user locations, and state. Additionally, a BMS combines sensor information to derive inferred-context, such as user activity. Similar to IoT devices, inferred-context instances have to be created manually. As the number of devices and inferred-context instances increases, keeping track of all associations becomes a time-consuming and error-prone task.
The hypothesis of the thesis is that users who interact with the building create use-patterns in the data, which describe functional relations between devices and inferred-context instances, e.g., which desk-movement sensor is used to infer desk-presence and which overhead light it controls; use-patterns can also reveal structural relations, e.g., the relative position of spatial sensors. To test the hypothesis, this thesis presents an extension to the new IoT class rule programming paradigm, which simplifies rule creation based on classes. The proposed extension uses a semantic compiler to simplify the device and inferred-context associations. Using direct-context information and template classes, the compiler creates all possible inferred-context instances. Buildings using context-aware BMSs will respond dynamically to user behaviour, e.g., the illumination required for computer work is provided by adjusting blinds or increasing the dim setting of overhead ceiling lamps. We propose a rule mining framework to extract use-patterns and find the functional and structural relationships between devices. The rule mining framework uses three stages: (1) event extraction, (2) rule mining, and (3) structure creation. The event extraction stage combines the building's data into a time series of device events. In the rule mining stage, rules are mined from the time series using the established temporal interval tree association rule learner algorithm. Additionally, we propose a rule extraction algorithm for spatial sensors' data, based on statistical analysis of user transition times between adjacent sensors, and we introduce a new rule extraction algorithm based on increasing belief. In the last stage, structure creation uses the extracted rules to produce device association groups, a hierarchical representation of the building, or the relative location of spatial sensors. The proposed algorithms were tested using a year-long installation in a living lab consisting of a four-person office, a 12-person open office, and a meeting room. For the spatial sensors, four locations within public buildings were used: a meeting room, a hallway, a T-crossing, and a foyer. The recording times range from two weeks to two months, depending on scenario complexity.
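As a rough illustration of the rule mining stage, the following sketch counts how often one device event follows another within a time window and keeps high-confidence pairs as rules. The window, confidence threshold, and event names are assumptions; the temporal interval tree association rule learner used in the thesis is considerably more sophisticated.

```python
# Toy temporal rule mining over a time-series of device events:
# keep "A is followed by B within `window` seconds" rules whose
# confidence exceeds `min_conf`.
from collections import Counter

def mine_rules(events, window=5.0, min_conf=0.8):
    # events: list of (timestamp, device_id), sorted by timestamp
    antecedents = Counter()
    pairs = Counter()
    for i, (t_a, a) in enumerate(events):
        antecedents[a] += 1
        seen = set()
        for t_b, b in events[i + 1:]:
            if t_b - t_a > window:
                break
            if b != a and b not in seen:
                pairs[(a, b)] += 1
                seen.add(b)
    return {
        (a, b): n / antecedents[a]
        for (a, b), n in pairs.items()
        if n / antecedents[a] >= min_conf
    }

events = [(0.0, "desk_motion"), (1.2, "desk_presence"),
          (9.0, "desk_motion"), (9.8, "desk_presence")]
print(mine_rules(events))  # {('desk_motion', 'desk_presence'): 1.0}
```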
We found that user-generated patterns do appear in building data. The rule mining framework produced structures that represent functional and spatial relationships of a building's devices and provide sufficient information to automate maintenance tasks, e.g., automatic device naming. Furthermore, we found that environmental changes are also a source of device data patterns, which provide additional associations. For example, using the framework we found the façade group for exterior light sensors; this group can be used to automatically find an alternative signal source to replace broken outdoor light sensors. Finally, the rule mining framework successfully retrieved the relative location of spatial sensors in all locations but the foyer.
Data management is a cornerstone for any kind of information system, including those in the aerospace and aviation sector. In contrast to conventional domains, software development in the avionics domain must adhere to a legally binding certification process, called qualification. The success of the process depends on compliance with international standards, such as DO-178: Software Considerations in Airborne Systems and Equipment Certification. From a software developer's perspective, challenges arise in terms of methods and tools: techniques that have a potential impact on the deterministic and predictable execution of avionics software are prohibited.
The objective of this thesis' research is to develop a scalable method to realize data management for multi-variant avionics software under the restrictions and constraints of the domain. Since avionics software faces very long life cycles (up to 75 years), a particular focus is placed on maintenance and evolution. Based on insights gained from a semi-structured interview at Airbus Helicopters, industrially established approaches to implementing qualified avionics software are first assessed and then compared with respect to their strengths and weaknesses for data management. As a result, a novel development approach is proposed that combines model-based techniques and product-line technology to derive the source code of highly specific data-management variants, as well as the majority of assets required for the qualification process, from a declarative system specification.
In order to demonstrate the practicability of the approach in industry, a framework is presented that is deployed and applied at Airbus Helicopters to generate qualifiable data-management components for the variants of the NH90 helicopter. Maintainability is shown by means of a domain-specific optimization, in which the model-based and generative approach is used to establish safe memory overlays at compile time. Key findings reveal a substantially reduced memory footprint (29.1% in a real-world scenario), as well as a significantly simplified implementation process, which would not be achievable using conventional methods for software development in the avionics domain.
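The declarative-specification-to-source-code idea can be sketched as follows; the specification format and the emitted C fragment are purely hypothetical stand-ins for the framework's actual models and templates.

```python
# Sketch: derive a statically allocated data-management component from
# a declarative specification (all names and formats hypothetical).

SPEC = {
    "record": "NavPoint",
    "fields": [("lat", "double"), ("lon", "double"), ("id", "uint32_t")],
    "max_entries": 256,  # static allocation: no dynamic memory in avionics code
}

def generate_c(spec):
    fields = "\n".join(f"    {ctype} {name};" for name, ctype in spec["fields"])
    return (
        f"typedef struct {{\n{fields}\n}} {spec['record']};\n\n"
        f"static {spec['record']} store[{spec['max_entries']}];\n"
        f"static uint32_t store_count = 0U;\n"
    )

print(generate_c(SPEC))
```

Generating such code from one specification per variant is what allows many highly specific variants, plus qualification assets, to be maintained from a single declarative source.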
For monumental buildings as part of our cultural heritage in particular, as well as for buildings in general, various methods for the digital storage of information about monumental buildings were investigated within the MonArch project. The resulting MonArch system can be used for the documentation of monumental buildings and stores the digital model of a building in a relational database. The digital model of the building is created by segmenting it into building parts, which can then be combined into a structural hierarchy. In this context, a structural hierarchy is a hierarchy of building parts related by a part-of relationship. The structural hierarchy makes it possible to annotate information, e.g., documents, with a spatial reference. In addition, a topic hierarchy is supported, which allows information to be described thematically with terms.
When considering spatial and thematic queries in networked MonArch systems, in which several building archives are joined together, this strong binding of information to the unique structure of each building is an obstacle to a simple procedure for spatial search. Since every building differs in its specific structural and spatial composition, a spatial query tailored to the peculiarities of one building returns no results for other buildings. For thematic queries, incompatible topic hierarchies are an obstacle that prevents cross-archive thematic queries. The greatest challenge is mapping structural hierarchies and topic hierarchies onto each other.
To solve this problem, networked information systems resort to a suitable transformation of the original query, either to broaden the query focus (relaxation) or to adapt it to the conditions of the remote information system (transformation). The query transformation and relaxation procedure presented in this work exploits a generalization relationship to automatically transform a query posed against one specific structural and topic hierarchy. For topic hierarchies, shared parent topics serve as a starting point; for structural hierarchies, type information about building parts can represent the generalization relationship. The transformed, and thereby relaxed, query can then be posed to a network of MonArch systems without a manual selection of building parts in other structural hierarchies or an adapted topic selection, and without knowledge of the structural hierarchies of the other buildings in the network. This work presents several relaxation procedures, e.g., an adapted spreading-activation procedure, for the automatic transformation of spatial and thematic queries, with the goal of avoiding a complete mapping between the structural hierarchies of buildings and topic hierarchies. This goal is achieved by extending the MonArch data model and generalizing MonArch queries, which allows query transformation at query time.
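As an illustration of the spreading-activation idea, the following toy sketch propagates activation along part-of edges of a structural hierarchy; the graph, decay factor, and threshold are assumptions, not the adapted procedure developed in the thesis.

```python
# Toy spreading activation over a part-of hierarchy: a query anchored
# at one building part also activates related parts, relaxing the
# query beyond the originally selected part.

def spread(graph, sources, decay=0.5, threshold=0.1):
    # graph: node -> list of neighbouring nodes (part-of edges, both ways)
    activation = {node: 1.0 for node in sources}
    frontier = list(sources)
    while frontier:
        node = frontier.pop()
        for neighbour in graph.get(node, []):
            a = activation[node] * decay
            if a > activation.get(neighbour, 0.0) and a >= threshold:
                activation[neighbour] = a
                frontier.append(neighbour)
    return activation

# Tiny structural hierarchy: nave and choir are parts of the building.
graph = {
    "building": ["nave", "choir"],
    "nave": ["building", "north_wall"],
    "choir": ["building"],
    "north_wall": ["nave"],
}
print(spread(graph, ["north_wall"]))
# {'north_wall': 1.0, 'nave': 0.5, 'building': 0.25, 'choir': 0.125}
```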
Optical Graph Recognition
(2017)
Graphs are an important model for representing structural information between objects: objects are identified with nodes, and a binary relation between objects with edges. Graphs have many uses, e. g., in the social sciences, the life sciences, and engineering. There are two primary representations: abstract and visual. The abstract representation is well suited for processing graphs by computers and is given by an adjacency list, an adjacency matrix, or any other abstract data structure. A visual representation is used by human users, who prefer a picture; common terms are diagram, scheme, plan, or network. The objective of Graph Drawing is to transform a graph into a visual representation, called the drawing of a graph. The goal is a "nice" drawing.
In this thesis we introduce Optical Graph Recognition (OGR), which reverses Graph Drawing and transforms a digital image of a graph into an abstract representation. Our approach consists of four phases: Preprocessing, where we determine which pixels of an image are part of the graph; Segmentation, where we recognize the nodes; Topology Recognition, where we detect the edges; and Postprocessing, where we enrich the recognized graph with additional information. We apply established digital image processing methods and exploit the special property that the image contains nodes connected by edges. We have focused on developing algorithms that need as few parameters as possible or that calibrate their parameters automatically. Most false recognition results are caused by crossing edges, as these make tracing the edges difficult and can lead to further recognition errors.
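The Segmentation phase can be illustrated with a naive sketch: given a binarized image from Preprocessing, connected components of foreground pixels serve as node candidates. This is a simplification of the actual OGR pipeline and assumes 4-connectivity.

```python
# Naive node-candidate detection: connected components of foreground
# pixels in a binary image (flood fill with an explicit stack).

def connected_components(image):
    # image: 2D list of 0/1 pixels; returns a list of pixel sets
    h, w = len(image), len(image[0])
    seen, components = set(), []
    for y in range(h):
        for x in range(w):
            if image[y][x] == 1 and (y, x) not in seen:
                stack, comp = [(y, x)], set()
                seen.add((y, x))
                while stack:
                    cy, cx = stack.pop()
                    comp.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and image[ny][nx] == 1 and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                components.append(comp)
    return components

img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 1]]
print(len(connected_components(img)))  # 2 node candidates
```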
We have evaluated hand-drawn and computer-drawn graphs. Our algorithms have a very high recognition rate for computer-drawn graphs: e. g., from a set of 100,000 computer-drawn graphs, over 90% were correctly recognized. Most false recognition results were observed for hand-drawn graphs, as they can include drawing errors and inaccuracies. For universal usability we have implemented a prototype called OGRup for mobile devices like smartphones or tablet computers. With our software it is possible to take a picture of a graph directly via a built-in camera, recognize the graph, and use the result for further processing. Furthermore, in order to gain more insight into the way a person draws a graph by hand, we have conducted a field study.
This thesis presents various techniques that aim at enabling more effective and more efficient approaches to automatic software verification. After a brief motivation of why automatic software verification is becoming ever more relevant, we detail the formalism used in this thesis and the concepts it builds on.
We then describe the design and implementation of the value analysis, an analysis for automatic software verification that tracks state information concretely. From a thorough evaluation based on well over 4,000 verification tasks from the latest edition of the International Competition on Software Verification (SV-COMP), we learn that this plain value analysis leads to an efficient verification process for many verification tasks, but at the same time fails to solve other verification tasks due to state-space explosion. From this insight we infer that some form of abstraction technique must be added to the value analysis in order to also allow the successful verification of large and complex verification tasks.
As a solution, we propose to incorporate counterexample-guided abstraction refinement (CEGAR) and interpolation into the value domain. To this end, we design a novel interpolation procedure that extracts interpolants for the value domain from infeasible counterexamples, allowing us to form a precision strong enough to exclude these infeasible counterexamples and to make progress in the CEGAR loop. We then describe several optimizations and extensions of these concepts that make the value analysis with CEGAR competitive for automatic software verification.
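The CEGAR loop described here can be illustrated with a deliberately tiny, runnable toy: the precision is a set of tracked variables, and refinement adds the variables needed to exclude an infeasible path. The control-flow encoding and the "track every variable on the path" refinement are crude stand-ins for the interpolation procedure above, not CPAchecker's actual interfaces.

```python
# Program as a tiny control-flow graph: location -> [(operation, successor)].
# Operations: ("assign", var, value) or ("assume", var, expected_value).
CFG = {
    0: [(("assign", "x", 1), 1)],
    1: [(("assume", "x", 0), 9),   # guarded edge into the error location
        (("assume", "x", 1), 2)],
    2: [],
}
ERROR_LOC = 9

def abstract_path(precision, loc=0, state=None, path=()):
    # DFS for a path to the error location; only variables in the
    # precision are tracked, so untracked assumes pass (over-approximation).
    state = dict(state or {})
    if loc == ERROR_LOC:
        return path
    for op, succ in CFG.get(loc, []):
        kind, var, val = op
        s = dict(state)
        if kind == "assign" and var in precision:
            s[var] = val
        if kind == "assume" and var in s and s[var] != val:
            continue  # branch excluded by tracked information
        result = abstract_path(precision, succ, s, path + (op,))
        if result is not None:
            return result
    return None

def is_infeasible(path):
    # Replay the abstract counterexample with full precision.
    state = {}
    for kind, var, val in path:
        if kind == "assign":
            state[var] = val
        elif kind == "assume" and state.get(var, val) != val:
            return True
    return False

def cegar():
    precision = set()
    while True:
        cex = abstract_path(precision)
        if cex is None:
            return "TRUE"    # error location unreachable: program is safe
        if not is_infeasible(cex):
            return "FALSE"   # counterexample is real: bug found
        # Crude stand-in for interpolation: track every variable
        # occurring on the infeasible path.
        precision |= {var for _, var, _ in cex}

print(cegar())  # prints TRUE after one refinement
```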
As the next step, we combine the value analysis with CEGAR with a predicate analysis to obtain a more precise and efficient composite analysis based on CEGAR. This composite analysis is indeed on a par with the world's leading software verification tools, as witnessed by the results of SV-COMP'13, where this approach achieved 2nd place in the overall ranking.
With competitive CEGAR-based analyses available for the value domain, the predicate domain, and their combination, we then turn our attention to techniques that aim to make all these CEGAR-based approaches more successful. Our first novel idea in this regard is based on the concept of infeasible sliced prefixes, which allow the computation of different precisions from a single infeasible counterexample. This adds choice to the CEGAR loop, whereas without this enhancement no choice of a specific precision, i. e., a specific refinement, is possible. In our evaluation we show, for both the value analysis and the predicate analysis, that choosing different infeasible sliced prefixes during the refinement step leads to major differences in verification effectiveness and verification efficiency.
Building on the concept of infeasible sliced prefixes, we define several heuristics to precisely select a single refinement from a set of possible refinements. We make this new concept, which we refer to as guided refinement selection, available to both the value and predicate analyses, and in a large-scale evaluation we investigate which selection technique leads to well-suited abstractions and, thus, to a more effective verification process. Additionally, we present the idea of inter-analysis refinement selection, where the refinement component of a composite analysis may decide which of its component analyses is best to refine, and in yet another evaluation we highlight the positive effects of this technique.
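Guided refinement selection can be illustrated with a toy scoring heuristic over candidate refinements extracted from different infeasible sliced prefixes; the cost table and the "prefer cheap domain types" rule are illustrative assumptions, not the thesis's exact heuristics.

```python
# Toy refinement selection: each candidate refinement is the set of
# (variable, domain_type) pairs extracted from one infeasible sliced
# prefix; pick the candidate that is cheapest to track.
candidates = [
    {("i", "int"), ("n", "int")},          # loop-counter heavy
    {("flag", "bool")},                    # small, boolean-only
    {("buf", "array"), ("flag", "bool")},
]

COST = {"bool": 1, "int": 5, "array": 25}  # assumed relative tracking cost

def score(refinement):
    # Lower is better: prefer refinements that are cheap to track.
    return sum(COST[t] for _, t in refinement)

best = min(candidates, key=score)
print(best)  # {('flag', 'bool')}
```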
Finally, we present the results of SV-COMP'16, where the verifier we contributed, which is based on the concepts and ideas presented in this thesis, achieved 1st place in the category DeviceDriversLinux64.