005 Computer programming, programs, data
The inflationary use of currently prominent terms, such as data centricity or the digital age, without the effort of naming and explaining how the phenomena they refer to are to be understood and by which characteristics they are distinguished, does nothing more than obscure meaning, without any chance of gaining insight or forming a well-founded opinion. Algorithms, data, and digitization are three of the candidates that are frequently used but only rarely explained in a well-founded and differentiated manner. This article addresses this desideratum. It deals with these 'three unknowns' and attempts to grasp and substantiate the terms and the phenomena they refer to, and to formulate what exactly is 'new' and what constitutes the quality of difference to previous states without digitization.
In this thesis, we investigate different possibilities to protect the Android ecosystem better. We focus on protection mechanisms for application developers, and present modern attacks against sandbox-protected applications and the developer’s intellectual property, ultimately providing enhanced approaches for defense against these attacks. Our defensive approaches range from runtime-shielding measures to analysis-impeding obfuscation mechanisms.
First, we take a closer look at the communication possibilities of sandboxed applications on Android, namely the UI layer and Android's inter-process communication. We introduce attacks on applications working through the actors on Android's UI, starting with overlay windows, accessibility services, input editors, and screen captures. Android's inter-process communication is the second attack avenue we pursue. It is the primary means of communication for apps to interact with each other despite being sandboxed by the Android system. We show through assessments of the Google Play Store and third-party app stores that attacks on these mechanisms represent a blind spot in the attack models currently considered by developers. To provide relief, we introduce new protection mechanisms that developers can implement, and we enhance testing methodologies to consider these attacks in the future.
Second, we direct the reader's attention towards attacks on the developer's intellectual property. Due to Android's open-source nature and openly communicated standards, a trend of repackaging popular applications with malicious enhancements and republishing the malicious app has rooted itself in the malware community. To counteract this development, we present an enhanced centroid-based approach to clone detection and improved analysis-impeding obfuscation mechanisms that build on virtualization-based obfuscation. Our obfuscation approach works on Android's current runtime environment, as well as the previously employed 'Dalvik virtual machine', and can be used to obfuscate critical portions of an application's functionality against prying eyes. To make valid assumptions about the strength of virtualization-based obfuscation, we conduct a de-obfuscation study on the more mature x86/x64 platform, developing a reverse engineering approach for virtualization-obfuscated binaries.
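To make the idea of virtualization-based obfuscation concrete, the following is a minimal, hypothetical sketch in Python (the thesis targets Dalvik/ART bytecode, not Python): protected code is translated into bytecode for a custom virtual machine whose interpreter ships with the application, so a static analyst only sees opaque bytecode plus a dispatch loop.

# Toy VM sketch, not the thesis' implementation: real obfuscators randomize
# opcode encodings and handler order per build.
PUSH, ADD, MUL, RET = range(4)  # toy opcode set for a small stack machine

def run(bytecode):
    stack, pc = [], 0
    while pc < len(bytecode):           # dispatch loop of the embedded interpreter
        op = bytecode[pc]; pc += 1
        if op == PUSH:
            stack.append(bytecode[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == RET:
            return stack.pop()

# The protected computation (3 + 4) * 5 is visible only as opaque bytecode.
print(run([PUSH, 3, PUSH, 4, ADD, PUSH, 5, MUL, RET]))  # -> 35

Recovering the original logic from such a construct requires reverse engineering the interpreter itself, which is what the de-obfuscation study on x86/x64 mentioned above undertakes.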
We analyzed several hundred thousand Android applications during our research with automated approaches and several thousand apps with manual analysis, always opting for a responsible disclosure process for found vulnerabilities by providing developers with at least three months' notice before attempting a publication. The tools presented in this thesis are open-sourced under the MIT license, to facilitate their inclusion in development projects as well as their extension and further development. With the insights gained through the research for this thesis, we hope to provide developers with the tools and testing approaches they need to make the Android ecosystem more secure and safe.
Inner source (IS) is the use of open source software development practices and the establishment of open source-like communities within an organization. The organization may still develop proprietary software but internally opens up its development. IS promises to resolve problems of traditional software development by easing software reuse and enabling parties within an organization to collaborate across organizational boundaries.
However, it is unclear what elements constitute IS (problem I) and how to measure the presence and magnitude of IS collaboration (problem II). The large majority of research articles on IS to date are limited to qualitative results. There are as yet no quantitative studies on IS collaboration exploring how much IS collaboration takes place or how IS practices affect it (problem III).
We followed a three-phase research approach to address these problems. First, we performed an extensive literature survey and analyzed 43 IS publications. We found that four key elements constitute IS (shared cultural values, open development environment, communities around software, IS-specific scenarios) but that IS programs and projects differ on at least five dimensions (addressing problem I).
Second, we developed the patch-flow method (and a software tool implementing it) for measuring IS collaboration. Patch-flow is the flow of code contributions across organizational boundaries ("silos") such as organizational unit or cost center boundaries. We evaluated the method using case study research with a non-trivial industry organization and found it to be viable and useful to practitioners (addressing problem II).
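A minimal sketch of the patch-flow idea under simplifying assumptions: a contribution counts as patch-flow when the contributor's organizational unit differs from the unit that owns the receiving component. The field names (author_unit, owning_unit) are illustrative and not the schema of the patch-flow tool.

from collections import Counter

def patch_flow(contributions):
    # Count contributions that cross an organizational boundary ("silo").
    flow, total = Counter(), 0
    for c in contributions:
        total += 1
        if c["author_unit"] != c["owning_unit"]:
            flow[(c["author_unit"], c["owning_unit"])] += 1
    return total, flow

total, flow = patch_flow([
    {"author_unit": "BU-A", "owning_unit": "BU-A"},   # internal contribution
    {"author_unit": "BU-B", "owning_unit": "BU-A"},   # crosses a silo boundary
])
print(f"{sum(flow.values())} of {total} contributions are patch-flow:", dict(flow))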
Third, we performed a multiple-case case study with three large software organizations running a total of five IS programs. We identified the IS practices used and the resulting patch-flow. We found patch-flow to exist in all organizations, but only a fraction of all code contributions to IS projects constitutes patch-flow. We observed that the number of IS practices implemented correlates with the distance between the parties involved in a collaboration. This indicates that IS is particularly suited to enable collaboration between parties of high distance in an organization (addressing problem III).
This thesis delivers a holistic definition of IS and the first classification framework for IS programs and projects. Researchers can use such a framework to reason more precisely about the generalizability of their results. The patch-flow measurement method is the first of its kind to measure and quantify IS collaboration and can serve as a basis for further quantitative analyses of IS collaboration. The exploration of patch-flow in the three industry cases can serve as an example and benchmark for practitioners.
The Internet of Things (IoT) brings comfort into the life of users. It is convenient to control the lights at home with an app without leaving the couch, or to open the front door with a remote control. This comfort, however, comes with security risks, as the wireless communication between components often relies on proprietary protocols. Such protocols are designed under size and energy constraints, whereby security is often only a secondary factor. Moreover, even when a default protocol such as IEEE 802.11 WLAN with enabled encryption is used, mobile devices such as smartphones can be located, threatening the location privacy of users.
This thesis is divided into two main parts. In the first part, we demonstrate how to passively locate a smartphone indoors using IEEE 802.11 WLAN and contribute a geolocation system with a mean accuracy of 0.58 m. Subsequently, we analyze how a company can incentivize users with different levels of privacy awareness to connect to a provided WLAN and give up their location privacy in exchange for certain benefits such as shopping discounts. We model this situation as a Bayesian Stackelberg game to find the company's best strategy.
In the second part, we showcase the challenges that arise for security researchers when investigating proprietary wireless protocols. Software Defined Radios (SDRs) offer a generic way to analyze such protocols operating on frequencies like 433.92 MHz or 868.3 MHz, where no default hardware such as a WLAN stick is available. SDRs, however, deliver raw signals that have to be demodulated and decoded before researchers can reverse-engineer the protocol format.
Our main contribution to this process is an open source software called Universal Radio Hacker (URH) which is, to the best of our knowledge, the first complete suite for wireless protocol investigation with SDRs. URH splits the protocol investigation process into the phases Interpretation, Analysis, Generation, and Simulation.
The goal of the Interpretation phase is to identify the transmitted bits and bytes by demodulating the signal. Apart from letting users manually adjust demodulation parameters, we contribute a set of algorithms that automatically find these parameters and integrate them into URH.
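As a hedged illustration of what this phase does for the simplest case, the following Python sketch (not URH's implementation) demodulates an on-off-keyed (ASK/OOK) signal: take the magnitude of the complex samples, threshold it, and sample one bit per symbol period. The samples-per-symbol and threshold values are exactly the kind of parameters the automatic detection has to find.

import numpy as np

def demodulate_ook(iq, samples_per_symbol, threshold=None):
    magnitude = np.abs(iq)                          # signal envelope
    if threshold is None:
        threshold = (magnitude.max() + magnitude.min()) / 2
    on_off = magnitude > threshold                  # carrier present / absent
    n_symbols = len(on_off) // samples_per_symbol
    symbols = on_off[: n_symbols * samples_per_symbol]
    symbols = symbols.reshape(n_symbols, samples_per_symbol)
    return (symbols.mean(axis=1) > 0.5).astype(int) # majority vote per symbol

# synthetic test: bits 1 0 1 1 at 100 samples per symbol
bits = np.repeat([1, 0, 1, 1], 100).astype(float)
iq = bits * np.exp(2j * np.pi * 0.05 * np.arange(bits.size))
print(demodulate_ook(iq, samples_per_symbol=100))   # -> [1 0 1 1]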
In the Analysis phase, the protocol format is reverse-engineered from the demodulated bits. This is a time-consuming manual process that slows down a security analysis. To address this problem, we design and implement a modular system that automatically finds protocol fields such as addresses and checksums. In combination with the automatic detection of modulation parameters, this speeds up the security analysis of unknown wireless protocols.
URH enables researchers to perform attacks on stateless and stateful protocols in the Generation and Simulation phases, respectively. In Generation, users can apply fuzzing to arbitrary data ranges, while the Simulation component of URH models protocol state machines and dynamically reacts to incoming messages from investigated devices. In both phases, the software automatically applies modulation and encoding to the bits that should be sent. We demonstrate three attacks on IoT devices that were found and executed with URH. The most complex attack involves opening an AES-protected wireless door lock in real time.
The steadily advancing trend towards multi- and manycore computing architectures poses enormous challenges for developers of application software. To be able to make efficient use of the raw parallelism provided by the hardware, programs must explicitly cater for that fact. The classic programming model of a multithreaded application process, which consists of a number of control flows (threads) managed and scheduled by the operating-system kernel within a shared address space, is being increasingly stretched to its limits: on the one hand, creating threads and switching between them is not sufficiently lightweight; on the other hand, structuring a parallel application around threads is often cumbersome and puts needless obstacles in the programmer’s way.
A suitable alternative to multithreaded programming is the use of a so-called concurrency platform that supports developers in articulating applications as a conglomeration of fine-grained concurrent activities. Concurrency platforms come with a runtime system that is responsible for dispatching the lightweight work packages to the available computing resources. Such runtime systems generally build upon the abstractions provided by an underlying commodity operating system such as Linux – that is, upon threads as abstractions of processor cores. This construction results in a number of disadvantages: for instance, the operating system’s scheduler acts without consulting the runtime system, thus making decisions that are potentially unfavourable from the application’s point of view; the coexistence of multiple parallel application processes causes problematic reciprocal interference; blocking system calls cause a temporary loss of parallelism.
This thesis presents AtroPOS, the design of an atrophied parallel operating system that is specially geared towards supporting concurrency platforms on manycore systems. AtroPOS is a derivative of the OctoPOS operating system and has undergone comprehensive further development; it rests on the paradigm of invasive computing and adopts its fundamental concepts: resource-aware programming, exclusive allocation of processor cores to applications, tailoring and dynamic reconfigurability. The operating-system kernel provides a boiled-down set of essential low-level abstractions on top of which arbitrary runtime libraries can be placed. InvRT, the invasive runtime system that supports executing applications of invasive computing, was developed as a reference runtime library.
By default, AtroPOS makes the existing physical processor cores directly available to the application; their virtualisation is strictly optional and there is no notion of threads. The scheduling of user control flows is carried out purely on the user level by the runtime system without involving the operating-system kernel; this allows for the efficient handling even of very fine-grained concurrency within the application. System calls that may block within the kernel have asynchronous invocation semantics and return immediately upon blocking so that loss of parallelism during the waiting time is ruled out by design. Notification of completed system operations is carried out by means of a generic mechanism that passes user-defined data structures upward to the application and can be used by the runtime system to construct arbitrary synchronisation data structures such as futures. The same versatile mechanism is harnessed on tiled computing systems to allow parts of a distributed application to communicate with one another.
In addition, AtroPOS offers configurable vertical isolation: the strict separation of the operating-system kernel from the application can be enabled and disabled in a coarse- and fine-grained manner, and both statically and dynamically. With this, type-safe applications can issue system calls as ordinary function calls and thus lower their direct and indirect costs.
The aforementioned concepts were implemented in the AtroPOS kernel and the InvRT runtime system in the context of this thesis; they were evaluated with the aid of micro-benchmarks and various application suites. Moreover, the runtime library of the parallel programming language Cilk Plus – an extension of C/C++ – was ported to the AtroPOS interface in order to showcase the versatility of the approach.
Automatic Code Generation for Massively Parallel Applications in Computational Fluid Dynamics
(2019)
Solving partial differential equations (PDEs) is a fundamental challenge in many application domains in industry and academia alike. With increasingly large problems, efficient and highly scalable implementations become more and more crucial. Today, facing this challenge is more difficult than ever due to the increasingly heterogeneous hardware landscape. One promising approach is developing domain‐specific languages (DSLs) for a set of applications. Using code generation techniques then allows targeting a range of hardware platforms while concurrently applying domain‐specific optimizations in an automated fashion. The present work aims to further the state of the art in this field. As domain, we choose PDE solvers and, in particular, those from the group of geometric multigrid methods. To avoid having a focus too broad, we restrict ourselves to methods working on structured and patch‐structured grids.
We face the challenge of handling a domain as complex as ours, while providing different abstractions for diverse user groups, by splitting our external DSL ExaSlang into multiple layers, each specifying different aspects of the final application. Layer 1 is designed to resemble LaTeX and allows inputting continuous equations and functions. Their discretization is expressed on layer 2. It is complemented by algorithmic components which can be implemented in a Matlab-like syntax on layer 3. All information provided to this point is summarized on layer 4, enriched with particulars about data structures and the employed parallelization. Additionally, we support automated progression between the different layers. All ExaSlang input is processed by our jointly developed Scala code generation framework to ultimately emit C++ code. We particularly focus on how to generate applications parallelized with, e.g., MPI and OpenMP that are able to run on workstations and large-scale clusters alike.
We showcase the applicability of our approach by implementing simple test problems, like Poisson’s equation, as well as relevant applications from the field of computational fluid dynamics (CFD). In particular, we implement scalable solvers for the Stokes, Navier‐Stokes and shallow water equations (SWE) discretized using finite differences (FD) and finite volumes (FV). For the case of Navier‐Stokes, we also extend our implementation towards non‐uniform grids, thereby enabling static mesh refinement, and advanced effects such as the simulated fluid being non‐Newtonian and non‐isothermal.
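For readers unfamiliar with the targeted solver class, the following is a minimal Python sketch of a geometric multigrid V-cycle for the 1D Poisson problem -u'' = f on a structured grid. It illustrates the method family only and is not code generated by the framework.

import numpy as np

def smooth(u, f, h, iters=2, omega=2.0 / 3.0):
    # weighted-Jacobi smoother for -u'' = f with homogeneous Dirichlet boundaries
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    if u.size <= 3:                                  # coarsest level: solve directly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)                              # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((u.size + 1) // 2)                 # restrict residual (full weighting)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)       # coarse-grid correction
    e = np.zeros_like(u)                             # prolongate by linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)                       # post-smoothing

n = 129                                              # structured grid with 2**7 + 1 points
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                   # exact solution: u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / (n - 1))
print(np.max(np.abs(u - np.sin(np.pi * x))))         # error near the discretization limit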
This thesis presents a fully Asynchronous Many-Task (AMT) runtime system extending the C++ programming language. The focus is on defining a distributed, asynchronous programming model based on the C++ programming language, and on presenting performance-portable Application Programming Interfaces (APIs) for shared- and distributed-memory computing as well as for accelerators.
With the rise of multi- and many-core architectures, the C++ language was amended with support for concurrency and parallelism. This work derives its methodology for massive parallelism from this industry standard and extends it with fine-grained user-level threads as well as distributed computing, allowing large-scale supercomputers to employ the same syntax and semantics for remote and local operations. By leveraging the nature of asynchronous, task-based message passing using a one-sided Remote Procedure Call (RPC) mechanism, the overarching principle of 'work follows data' manifests itself.
By leveraging the asynchronous, task-based nature of the future as a handle for asynchronously computed results, the term Futurization is coined, denoting a technique based on Continuation Passing Style (CPS) programming. This technique allows dealing with millions of concurrently running asynchronous tasks. By attaching continuations, dynamic dependency graphs are formed naturally from the regular control flow of the code. The effect is that the runtime system parallelizes the program by executing multiple continuations in parallel. In other words, future-based synchronization expresses fine-grained constraints. Furthermore, Futurization blends in naturally with other well-known techniques, such as data parallelism; those other paradigms can be built on top of Futurization.
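The thesis develops Futurization for C++; as a loose analogy only, the following Python sketch shows the underlying pattern of futures with attached continuations, where the dependency graph emerges from ordinary control flow. The helper `then` is illustrative and not an API from the thesis.

from concurrent.futures import ThreadPoolExecutor, Future

def then(fut, fn):
    # Return a future for fn(fut.result()) that becomes ready once fut does.
    out = Future()
    def continuation(done):
        try:
            out.set_result(fn(done.result()))
        except Exception as exc:
            out.set_exception(exc)
    fut.add_done_callback(continuation)
    return out

with ThreadPoolExecutor() as pool:
    a = pool.submit(sum, range(1000))   # asynchronously computed value
    b = then(a, lambda s: s * 2)        # continuation, runs when a is ready
    c = then(a, lambda s: s + 1)        # independent continuation on the same future
    print(b.result(), c.result())       # synchronize only where results are needed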
The technique mentioned above provides the necessary foundation to address the needs of modern scientific applications targeting High Performance Computing (HPC) platforms. However, handling increasingly complicated architectures, with different memory access latencies and accelerators, remains a challenge. This thesis attempts to solve it by providing the means to define computational and memory targets, reusing already defined or upcoming concepts for C++, and by providing means to link them together to reinforce the principle of 'work follows data'.
The feasibility of this approach is demonstrated with a set of low-level micro-benchmarks showing that the provided abstractions come with minimal overhead. A 2D stencil example attesting to the programmability of Futurization, as well as its performance benefits, serves as the second benchmark. Lastly, the results of futurizing the astrophysics application OctoTiger, a 3D octree Adaptive Mesh Refinement (AMR) based binary star simulation, running at extreme scales conclude the experimental section.
Library updates, program errors, and maintenance tasks in general force developers to apply the same code change to different locations within their projects. If the locations are very different from each other, it is very time-consuming to identify all of them. Even with sufficient time, there is no guarantee that a manual search reveals all locations. If the change is critical, each missed location can lead to severe consequences. The manual application of the code change to each location can also become tedious. If the change is larger, developers have to execute several transformation steps for each code location. In the worst case, they forget a required step and thus add new errors to their projects.
To support developers in this task, this thesis presents the recommendation system ARES. It produces more accurate recommendations than previous approaches. ARES achieves this by preserving variations in the training examples in more detail, thanks to its pattern design and its improved handling of code movements. With the tool C3, this thesis also presents an extension to ARES that allows the extraction of training examples from code repositories. In combination, both tools form a recommendation system that automatically learns code recommendation patterns from repositories.
ARES, C3, and similar tools rely on lists of edit operations to express code changes. However, creating compact (i.e., short) lists of edit operations from data in repositories is difficult. As previous approaches produce lists that are too long for ARES and C3, this thesis presents a novel tree differencing approach called MTDIFF. The evaluation shows that MTDIFF shortens the edit operation lists compared to other state-of-the-art approaches.
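To illustrate why edit-operation lists grow without good move handling, here is a deliberately naive, hypothetical tree differencer in Python: it compares children positionally, so moved or reordered code shows up as extra update/insert/delete operations instead of a single move, which is the kind of inflation better tree differencing reduces.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def diff(a, b, ops=None, path="/"):
    # Naive positional comparison, no move detection.
    ops = [] if ops is None else ops
    if a is None:
        ops.append(("insert", path, b.label))
    elif b is None:
        ops.append(("delete", path, a.label))
    else:
        if a.label != b.label:
            ops.append(("update", path, a.label, b.label))
        for i in range(max(len(a.children), len(b.children))):
            ca = a.children[i] if i < len(a.children) else None
            cb = b.children[i] if i < len(b.children) else None
            diff(ca, cb, ops, f"{path}{i}/")
    return ops

old = Node("block", [Node("stmt_a"), Node("stmt_b")])
new = Node("block", [Node("stmt_b"), Node("stmt_a")])   # statements swapped
print(diff(old, new))   # two update operations instead of one move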
Multi-core processors are ubiquitous. Even embedded systems nowadays use processors with multiple cores. Such use cases often impose latency requirements because they interact with physical objects.
One consequence is a need for synchronisation algorithms that provide predictable latency in addition to high throughput. A promising approach is asynchronous critical sections, which avoid waiting even when a resource is occupied.
This paper introduces two algorithms that allow for both synchronous and asynchronous critical sections. Both algorithms are based on a novel wait-free queue. The evaluation shows that both algorithms outperform pre-existing synchronisation algorithms for asynchronous requests and perform similarly to traditional lock-based algorithms for synchronous requests. In summary, our synchronisation algorithms can improve the throughput and predictability of parallel applications.
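As a purely conceptual sketch of what "asynchronous critical section" means (not the paper's wait-free algorithms, and Python threading gives no wait-freedom guarantees): a request that finds the section occupied is enqueued and executed later by the current owner, so the issuing thread never blocks waiting for the resource; at most it becomes the one that drains the queue.

import threading
from collections import deque

class AsyncCriticalSection:
    def __init__(self):
        self._pending = deque()
        self._guard = threading.Lock()

    def submit(self, work):
        self._pending.append(work)       # publish the request and return immediately
        # Whoever manages to acquire the guard drains all pending requests.
        while self._pending and self._guard.acquire(blocking=False):
            try:
                while self._pending:
                    self._pending.popleft()()
            finally:
                self._guard.release()

counter = 0
section = AsyncCriticalSection()

def increment():
    global counter
    counter += 1                         # executed only while the guard is held

threads = [threading.Thread(target=lambda: [section.submit(increment) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                           # 4000: every request executed exactly once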
Electrical energy is the single most important operating resource of computer systems. Although energy demand is an invisible system property by itself, its impact is omnipresent and obvious in manifold forms of appearance; sudden system failures (i.e., system breakdowns) and recurrent standard operations (i.e., system charging) serve as practical examples. The energy demand of hardware components is a physical property of the integrated transistor circuits that build today's computers. However, dynamic energy demand at the hardware level is caused by system activities (i.e., processes) at the software level. This causal relationship puts the analysis and improvement of system software into focus: system software yields challenges and opportunities in equal measure for reducing the energy demand of the system at the hardware level. In particular, fine-tuning of system components offers distinct measures to improve the energy efficiency of computer systems. Improvements concern the coherent design of application and system software under consideration of hardware aspects.
This thesis presents, implements, and evaluates unique concepts for proactive energy-aware computing on energy-efficient systems-on-a-chip. In particular, it contributes a development method for energy-aware programming that originates in static and dynamic program analysis to support programmers in the design of energy-aware programs. To assist programmers in reducing the energy demand of their programs, the thesis proposes a software-hardware tooling infrastructure that combines energy-aware programming techniques with automated energy-demand analysis at the system level. To further reduce the energy demand of computer systems, the thesis implements a process executive at the operating-system level that exploits a priori information at run time to reduce the energy demand of processes. The corresponding cross-layer approach enables the transfer of programmers’ knowledge to the operating system to reduce the energy demand at run time.
The thesis is the first to combine dynamic program analysis techniques and the automatic creation of program variants to support energy-aware programming at the operating-system level. The distinct combination of application knowledge to identify and set the relevant tuning knobs for the energy-efficient operation of a computing system, bound to an operating system, is claimed to be novel.
Many applications in scientific computing require solving one or more partial differential equations (PDEs). For this task, solvers from the class of multigrid methods are known to be amongst the most efficient. An optimal implementation, however, is highly dependent on the specific problem as well as the target hardware. As energy efficiency is a major concern in today's computing centers, energy-efficient platforms such as ARM-based clusters are actively researched. In this work, we present a domain-specific approach, starting with the problem formulation in a domain-specific language (DSL) down to code generation targeting a variety of systems, including embedded architectures. Furthermore, we present an approach to simulating embedded architectures to achieve an optimal hardware/software co-design, i.e., an optimal composition of software and hardware modifications. In this context, we use a virtual environment (OVP) that enables the adaptation of multicore models and their efficient simulation. Our approach shows that execution-time prediction for ARM-based platforms is feasible but has to be enhanced with more detailed cache and memory models. We substantiate our claims by providing results for the performance prediction of geometric multigrid solvers generated by the ExaStencils framework.
Inner source (IS) is the use of open source software development practices and the establishment of an open source-like culture within organizations. The organization may still develop proprietary software but internally opens up its development. A steady stream of scientific literature and practitioner reports indicates the interest in this research area. However, the research area lacks a systematic assessment of known research work: no model exists that defines IS thoroughly. Various case studies provide insights into IS programs in the context of specific organizations, but only a few publications apply a broader perspective. To resolve this, we performed an extensive literature survey and analyzed 43 IS-related publications plus additional background literature. Using qualitative data analysis methods, we developed a model of the elements that constitute IS. We present a classification framework for IS programs and projects and apply it to lay out a map of known IS endeavors. Further, we present qualitative models summarizing the benefits and challenges of IS adoption. The survey provides the first broad review of IS literature and a systematic arrangement of IS research results.
Data stream systems (DSSs) have found their way from research into operational use in recent years. So far, however, only minor efforts towards their standardization are discernible, and even fewer successes. It can therefore be assumed that the use of different DSSs will remain a reality in the medium term. However, integrating different DSSs for a specific deployment scenario is very labour-intensive. One possible solution is federated data stream systems, which form an abstraction layer above the actual system implementations and thus hide their differences from the user. Such a federation is complicated by the fact that today's data stream systems differ not only in the syntax of their query languages but also in their processing logic. This manifests itself in supposedly identical queries producing different results, or in result streams exhibiting different temporal behaviour. The application developer must be aware of the possible deviations and must be able to specify which of them he is willing to accept. This thesis describes an approach that allows the application developer either to define precisely how a query is to be processed or to leave certain aspects up to the system in order to exploit optimization potential.
This work is guided by the concrete requirements of the Data Stream Application Manager (DSAM) project and adopts its basic deployment scenario, in particular the automatic distribution of global data stream queries across a network of heterogeneous data stream systems. The developed methods have therefore largely been integrated into DSAM.
Memory forensics has become a powerful tool for the detection and analysis of malicious software. It provides investigators with an impartial view of a system, exposing hidden processes, threads, and network connections, by acquiring and analyzing physical memory. Because malicious software must be at least partially resident in memory in order to execute, it cannot remove all its traces from RAM. However, the memory acquisition process is vulnerable to subversion in compromised environments. Malicious software can employ anti-forensic techniques to intercept the acquisition and filter memory contents while they are copied.
In this thesis, we analyze 12 popular memory acquisition tools for Windows, Linux, and Mac OS X, and study their implementation with regard to how they enumerate and map memory. We find that all of the analyzed programs use the operating system to perform these tasks, and further illustrate this by implementing an open source memory acquisition framework for Mac OS X. In a survey of kernel rootkit techniques that prevent or filter physical memory access, we show that all 12 tested programs are vulnerable to anti-forensics, because they rely on the operating system for critical functions.
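To make the OS-dependence concrete, the following hedged Python sketch (assuming a Linux system with a readable /proc/iomem; root is typically required to see real addresses) prints the operating system's own view of the physical address space. This is exactly the kind of OS-provided enumeration the analyzed tools rely on, and which a rootkit in a compromised kernel could filter.

def parse_iomem(path="/proc/iomem"):
    # Each line has the form "start-end : name", possibly indented for nesting.
    regions = []
    with open(path) as f:
        for line in f:
            span, _, name = line.partition(" : ")
            start, _, end = span.strip().partition("-")
            regions.append((int(start, 16), int(end, 16), name.strip()))
    return regions

if __name__ == "__main__":
    for start, end, name in parse_iomem():
        if name == "System RAM":
            print(f"{start:#014x}-{end:#014x}  {name}")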
To eliminate this vulnerability, we develop an operating system independent approach that directly utilizes the hardware to enumerate and map memory. By interacting with the PCI controller, we are able to safely avoid memory-mapped device buffers while acquiring the entire physical address space. We program the page tables directly to map memory, forcing the MMU to facilitate arbitrary physical memory access from our driver's data segment. We implement our techniques in the open source memory acquisition frameworks Winpmem, Pmem, and OSXPmem, furthering the capabilities of memory acquisition software on the Windows, Linux, and Mac OS X platforms.
Finally, we apply our novel technique to related problems in memory forensics. Memory acquisition software for Linux can only be run on a system with the exact same kernel version and configuration as the system it was compiled on, due to dependencies on kernel data structures. We are able to create a minimal, kernel-independent version of our module, which we inject into a compatible host module on the target. By hijacking the host's data structures, we are able to load the infected module, redirect control flow, and communicate with it using a character device. A second innovative property of our acquisition approach is that, because we can enumerate the location of memory-mapped device buffers, we are able to safely access memory regions unknown to the operating system. This allows us to acquire malicious firmware during the memory acquisition process. We present a survey on firmware code and data in the physical address space, and show how we can capture the BIOS, PCI option ROMs, and the ACPI tables using our approach. We implement plugins for the open source memory analysis framework Volatility, which are able to extract the ACPI tables from memory and analyze them for malicious behavior.
Tailorable System Software
(2014)
System software, such as the operating system, provides no business value of its own. Its sole purpose is to serve the concrete application's needs -- that is, to map the functional and nonfunctional requirements efficiently to the functional and nonfunctional properties of the hardware.
Efficiency calls for specific, tailored system software; reusability demands generic solutions. To overcome this dilemma, most system software provides built-in static variability: It can be tailored at compile time with respect to a specific application–hardware use case.
In the case of Linux v3.2, this static variability is reflected by nearly 12000 configurable features that control the inclusion and exclusion of 28000 source files with 84000 conditional (#ifdef) blocks.
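As a rough, hypothetical illustration of how this variability manifests at the implementation level (this is not the analysis tooling developed in this work), a small Python script can count conditional preprocessor blocks and the CONFIG_* features they reference in a source tree:

import re, sys
from pathlib import Path

BLOCK = re.compile(r"^\s*#\s*if(n?def)?\b", re.MULTILINE)   # #if, #ifdef, #ifndef
FEATURE = re.compile(r"\bCONFIG_[A-Z0-9_]+\b")              # Kconfig feature references

def count_variability(root):
    blocks, features = 0, set()
    for path in Path(root).rglob("*.[ch]"):
        text = path.read_text(errors="ignore")
        blocks += len(BLOCK.findall(text))
        features.update(FEATURE.findall(text))
    return blocks, len(features)

if __name__ == "__main__":
    blocks, features = count_variability(sys.argv[1] if len(sys.argv) > 1 else ".")
    print(f"{blocks} conditional blocks referencing {features} CONFIG_* features")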
Variability by means of thousands of features imposes challenges both for system-software developers, who have to implement and maintain variability, and for application developers and administrators, who have to understand the impact of all these features in order to configure a tailored variant.
Over the last four years, my research has focused on methods and techniques to improve the design, implementation, and maintenance of static variability in highly tailorable system software. My central contributions in this respect are:
(a) The CiAO approach, which employs language techniques to achieve excellent up-tailorability of embedded system software (towards the requirements of a specific application).
(b) The Sloth approach, which employs generative techniques to achieve down-tailorability of embedded system software (towards better exploitation of modern commodity hardware).
(c) The VAMOS approach, which employs cross-language analysis techniques and holistic variability modeling to improve on the long-term maintainability of multi-paradigmatic variability implementations in existing large-scale system software, such as Linux.
This research has been carried out in collaboration with seven doctoral researchers and master students from my research group, four of whom have already defended.
In this dissertation, numerical simulations are used to investigate the influence of stress-induced birefringence on the eigenpolarization states of solid-state lasers. The fundamental physical relationships between the thermal, mechanical, and electromagnetic fields are treated, which arise from the inhomogeneously distributed absorption of the pump light in the laser crystal. The eigenpolarization states in the resonator, dominated by this photoelastic effect, are investigated. As one application, the eigenpolarization states, the output power, the beam quality, and the polarization purity of a laser resonator with Brewster plates are studied as a function of the crystal cuts of the Nd:YAG laser crystal. In a second example, the polarization-dependent stability behaviour of a laser resonator with an Nd:YAG crystal is used to generate an output beam of a specific circular polarization without additional optical elements. The influence of the crystal cuts on the stability ranges, the output power, and the beam quality is likewise investigated.
Chapter 2 covers the physical foundations of mechanics and optics applied in this work. An overview of various solid-state laser materials and their classification into crystal classes is given. The modelling of the crystal's mechanical stiffness behaviour using anisotropic and isotropic material models is discussed. As a further physical effect in the crystal, the coupling between the mechanical and optical fields and its mathematical treatment are described.
Chapter 3 deals with the eigenpolarization states in solid-state laser resonators with Nd:YAG crystals due to thermally induced birefringence. It discusses eigenpolarization states in the resonator and their computation by means of the Jones formalism. The eigenmodes of the electromagnetic field within a resonator are briefly mentioned. For computing the output power and beam quality of a resonator, the dynamic multimode analysis is briefly introduced. To enable stability investigations of solid-state lasers, the parabolic fit of the refractive indices, the ABCD matrix method, and the polarization-dependent refractive index with its effect on resonator stability are described.
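For orientation, the standard textbook relations underlying the Jones formalism referred to here (not reproduced from the thesis itself): a birefringent element with retardation $\delta$ whose fast axis is rotated by $\theta$ has the Jones matrix
$$J(\delta,\theta) = R(-\theta)\begin{pmatrix} e^{\,i\delta/2} & 0 \\ 0 & e^{-i\delta/2}\end{pmatrix} R(\theta), \qquad R(\theta) = \begin{pmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{pmatrix},$$
and the eigenpolarization states of the resonator are the eigenvectors of the round-trip Jones matrix, $J_{\mathrm{rt}}\,\mathbf{E} = \gamma\,\mathbf{E}$, with the magnitude of $\gamma$ describing the polarization-dependent round-trip attenuation.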
In Chapter 4, an investigation into improving the beam quality and output power by using a polarization filter in a resonator with an Nd:YAG crystal is carried out. After introducing the numerical model, the resulting temperature distribution and mechanical stresses in the crystal due to the absorbed pump light are presented. Subsequently, the stress-induced birefringence is shown. The resulting eigenpolarization states in the resonator with and without Brewster plates are computed with the Jones formalism. Finally, the achieved output powers, the beam quality, and the polarization purity are compared as a function of the crystal cut directions. The [100] cut with a rotation of the crystal about its own longitudinal axis by 0° and 90°, the [110] cut with a rotation by 0°, 45°, and 90°, and the [111] cut without variation of the rotation angle are considered.
Chapter 5 shows an example of generating a specific polarization state by exploiting the different stability behaviour caused by stress-induced birefringence in the resonator. After introducing the underlying model, the temperature profile and the mechanical stresses occurring in the crystal are examined. Furthermore, the resulting refractive indices and birefringence patterns, which depend on the cut direction of the crystal, are computed. The cut directions and rotations of the laser crystal are varied as in the previous chapter. The occurring eigenmode profiles are shown. Owing to the different manifestations of the birefringence, varying the length of the resonator produces certain ranges in which only the radial or only the azimuthal eigenpolarization state is stable. These ranges are computed quantitatively as a function of crystal cut and rotation. Within these ranges, the beam radii of the first-order eigenmodes, the output power, and the beam quality are determined and compared.
The final Chapter 6 summarizes the findings of this work, discusses limitations of the obtained results, and gives a possible outlook.
In summary, it can be stated that these numerical methods allow precise analyses of the eigenpolarization states in a solid-state laser resonator due to stress-induced birefringence, which was only possible to a limited extent, or not at all, with previous methods based on analytical formulas. With the methods presented here, laser resonators that are required to exhibit specific polarization properties can be simulated and developed effectively.
A quantum key distribution (QKD) system may be probed by an eavesdropper Eve by sending in bright light from the quantum channel and analyzing the back-reflections. We propose and experimentally demonstrate a setup for mounting such a Trojan-horse attack. We show it in operation against the quantum cryptosystem Clavis2 from ID Quantique, as a proof-of-principle. With just a few back-reflected photons, Eve discerns Bob's (secret) basis choice, and thus the raw key bit in the Scarani–Acín–Ribordy–Gisin 2004 protocol, with higher than 90% probability. This would clearly breach the security of the cryptosystem. Unfortunately, Eve's bright pulses have a side effect of causing a high level of afterpulsing in Bob's single-photon detectors, resulting in a large quantum bit error rate that effectively protects this system from our attack. However, in a Clavis2-like system equipped with detectors with less-noisy but realistic characteristics, an attack strategy with positive leakage of the key would exist. We confirm this by a numerical simulation. Both the eavesdropping setup and strategy can be generalized to attack most of the current QKD systems, especially if they lack proper safeguards. We also propose countermeasures to prevent such attacks.
This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared, a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74.6% and 82.6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest an improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54.2% for the trajectory growth method and 42.1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48.9% of correctly estimated ball contact points.
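A minimal Python sketch of the parabolic-flight idea behind the ball trajectory estimation (illustrative helper names, not the paper's implementation): fit the candidate positions of one flight phase with a linear model in x and a quadratic model in y (gravity), then extrapolate to frames where the ball was not detected.

import numpy as np

def fit_trajectory(frames, xs, ys):
    # frames: frame indices of accepted ball candidates; xs, ys: pixel positions
    t = np.asarray(frames, dtype=float)
    cx = np.polyfit(t, xs, 1)        # roughly constant horizontal velocity
    cy = np.polyfit(t, ys, 2)        # vertical motion under gravity (parabola)
    return lambda f: (np.polyval(cx, f), np.polyval(cy, f))

# synthetic candidates from three detections of one flight phase
traj = fit_trajectory([10, 14, 18], xs=[120, 160, 200], ys=[300, 240, 220])
print(traj(22))                      # predicted position four frames later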
In current practice in automated manufacturing, control programs are written by experts and validated in trial runs. Subsequent changes to already tested software carry risks that stand in the way of accounting for component faults after the fact. As a special form of fault-tolerant control, fault-hiding control reconfiguration provides a conceptual framework in which fault-induced changes of the process dynamics can be incorporated into the controller design. Notably, an existing control system can be retained unchanged in the reconfiguring control loop and keeps control over the process until a fault occurs.
The concept of fault-hiding control reconfiguration places a reconfigurator between the nominal controller and the faulty plant. Its task is to modify the signal flow between the nominal controller and the faulty plant in such a way that the reconfiguring control loop exhibits admissible behaviour, while the effect of a fault is hidden from the nominal controller.
The concept of fault-hiding control reconfiguration was developed for linear continuous-time systems, see [Ste05], and is transferred to discrete-event systems in this work. The dynamics of discrete-event systems are characterized by spontaneous state transitions that are accompanied by an externally visible event. Technical systems amenable to discrete-event modelling can be found, for example, in manufacturing automation. Based on the Supervisory Control Theory, see [RW87, RW89], the design of fault-hiding discrete-event reconfigurators is discussed, and the results are validated by means of a feasibility study.