005 Computer programming, programs, data
Concurrency platforms are a widespread tool for writing comprehensible and maintainable programs that utilize the parallel computing resources provided by modern hardware. Language-based concurrency platforms like Google's Go extend the benefits of concurrent programming to system requests by providing a seemingly blocking API and thus allow, e.g., the creation of simple yet high-performance network services. Unfortunately, existing concurrency platforms rely on outdated synchronous I/O and readiness interfaces of the underlying operating system. Schmaus et al. proposed a general and highly efficient way to integrate system-request capabilities into a prototypical concurrency platform employing modern queue-based asynchronous system-request interfaces (e.g., Linux's io_uring). This thesis takes the proposed integration one step further by adapting the inexpensive and effective work-stealing scheme to the request completions asynchronously generated by the operating system.
Furthermore, it presents a novel synthesis of the concurrency platform's worker-suspension mechanisms with the queue-based system-request interface. This synthesis, combined with the specialized task-suspension kernel interface suspendfd, increases the runtime system's throughput while reducing its latency. This thesis proposes suspendfd, an interface that allows efficient futex-like suspension while passing information about where new work can be obtained to an awakened worker. This information helps to mitigate the initial uninformed randomized work stealing. Additionally, using a specialized in-kernel object (i.e., suspendfd) allows the operating system to participate in the suspension mechanisms by directly posting notifications about completed system requests (e.g., a completed read() operation). This participation prevents further context switches otherwise caused by an additional round trip through user space. Micro-benchmarks show that, compared to busy polling, posting notifications from the kernel is an adequate trade-off between continuation latency (42 µs higher latency) and power consumption (55.51% less energy needed). A network echo server benchmark shows that completion stealing increases the runtime system's throughput by 54.87% and a suspendfd-based sleep strategy by 61.2% compared to the status quo. Furthermore, a suspendfd-based sleep strategy can reduce the time needed to search the Linux kernel source for a fixed string by 4.95%.
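To make the scheduling idea concrete, the following minimal Python sketch contrasts uninformed randomized work stealing with an informed wakeup that carries a hint about where completions were posted. It is an illustrative simulation only; all names such as post_completion and find_work are invented here and do not reflect the thesis' C/kernel implementation.

```python
import random
from collections import deque

# Illustrative sketch only: per-worker queues of ready continuations.
# A completion (e.g. a finished read()) is pushed to the completing
# worker's queue; an idle worker either steals blindly or, with a
# suspendfd-style hint, goes straight to the right victim.

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.queue = deque()          # local run queue
        self.hint = None              # "where new work can be obtained"

def post_completion(workers, owner, task):
    """Kernel-side notification: enqueue the completed request's
    continuation and leave a hint for idle workers."""
    workers[owner].queue.append(task)
    for w in workers:
        if not w.queue:
            w.hint = owner            # informed wakeup instead of random search

def find_work(workers, me):
    """Pop local work, else steal -- preferring the hinted victim."""
    if workers[me].queue:
        return workers[me].queue.popleft()
    victims = list(range(len(workers)))
    victims.remove(me)
    hint = workers[me].hint
    if hint is not None and hint != me:
        victims.remove(hint)
        victims.insert(0, hint)       # try the hinted queue first
    else:
        random.shuffle(victims)       # classic uninformed stealing
    for v in victims:
        if workers[v].queue:
            return workers[v].queue.pop()   # steal from the other end
    return None

workers = [Worker(i) for i in range(4)]
post_completion(workers, owner=2, task="resume-read-continuation")
print(find_work(workers, me=0))       # worker 0 finds the work on its first try
```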
The merging of the physical and virtual worlds shapes today's living and working environments. New technical solutions, usually in the form of smart product-service systems, are intended to address individual needs and demands. Adoption forecasts in this context require reliable data, information, and suitable analysis instruments for predicting individual behavior. Especially in pre-implementation phases, adoption forecasts are a valuable instrument for predicting future events or situations.
This work offers an interdisciplinary view of adoption and acceptance processes based on emotional anticipations, understood as a simulation of the future reality of human decision processes. The findings, obtained through a multi-phase mixed-methods approach, show that the thesis-based and empirically validated Affective Adoption Forecasting Framework A²FORE can meet the current challenges of estimating individual adoption decisions prior to the market launch of novel smart product-service systems, with a particular focus on vulnerable consumers in healthcare. A²FORE reflects the use of anticipatory emotions as a forecasting element for the success of smart product-service systems. This answers the practice-oriented demand for a manageable adoption forecasting model based on user acceptance testing in pre-implementation phases.
Polyphonic vocal music is an integral part of music cultures around the world. For studying performance aspects and cultural differences, the analysis of recorded audio material has become of increasing importance. This thesis contributes several computational tools for processing, analyzing, and exploring singing voice recordings using methods from signal processing, computer science, and music information retrieval (MIR). First, we develop an approach for applying time-varying pitch shifts to audio signals based on non-linear time-scale modification (TSM) and resampling techniques. We show that our method can be used to adjust intonation (fine-tuning of pitch) in vocal recordings, e.g., in postproduction contexts. Computational analysis of polyphonic vocal music typically requires annotations of the singers’ fundamental frequency (F0) trajectories, which are labor-intensive to generate and may not be available for a particular recording collection. As a second contribution, we present an approach to assess the reliability of automatically extracted F0-estimates by fusing the outputs of several F0-estimation algorithms. In this way, our approach enables the analysis and exploration of large unlabeled audio collections. One major challenge for the computational analysis of polyphonic singing is posed by stylistic elements such as pitch slides and pitch drifts, which can introduce blurring in analysis results. As a third contribution of this thesis, we present computational tools for handling such peculiarities. In particular, we develop musically motivated filtering techniques to detect stable regions in F0-trajectories and compensate for pitch drifts. Furthermore, our tools offer interactive feedback mechanisms that allow domain experts to incorporate musical knowledge. Development and evaluation of computational tools for analyzing polyphonic singing typically require suitable multitrack recordings with one or several tracks per voice, e.g., obtained from close-up microphones attached to a singer’s head and neck. However, such recordings are challenging to produce and thus of limited availability. As an additional contribution of this thesis, we introduce carefully organized and annotated multitrack research corpora of Western choral music and traditional Georgian vocal music, which are publicly accessible through interactive interfaces. Furthermore, considering these two culturally different forms of vocal music as concrete application scenarios, we evaluate our interactive computational tools and demonstrate their potential for corpus-driven research in the field of computational ethnomusicology.
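As an illustration of the fusion idea (not the exact algorithm of the thesis), the following Python sketch marks a frame as reliable only when several F0 estimators agree within a tolerance given in cents; the tolerance and the median-based fusion rule are assumptions made for this example.

```python
import numpy as np

def fuse_f0(trajectories, tol_cents=50.0):
    """Given several frame-wise F0 estimates (Hz, 0 = unvoiced) from
    different algorithms, return a fused trajectory plus a reliability
    mask: a frame is 'reliable' only if all estimators are voiced and
    agree within tol_cents."""
    F = np.vstack(trajectories).astype(float)      # shape (num_estimators, num_frames)
    voiced = np.all(F > 0, axis=0)
    # Compare on a logarithmic (cent) scale, relative to the median estimate.
    with np.errstate(divide="ignore", invalid="ignore"):
        cents = 1200.0 * np.log2(F / np.where(F > 0, np.median(F, axis=0), 1.0))
    spread = np.nanmax(np.abs(cents), axis=0)
    reliable = voiced & (spread <= tol_cents)
    fused = np.where(reliable, np.median(F, axis=0), 0.0)
    return fused, reliable

est_a = np.array([220.0, 221.0, 0.0, 440.0])
est_b = np.array([219.0, 300.0, 0.0, 441.0])
fused, mask = fuse_f0([est_a, est_b])
print(fused, mask)   # frames 0 and 3 are reliable, frames 1 and 2 are not
```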
With the ongoing development of mobile and desktop operating systems, existing file systems are also updated with new features, or vendors even introduce new file systems. Since Android 7.0, the predominant full-disk encryption has been replaced step by step by the new file-based encryption (FBE), implemented as an ext4 feature. Since Android 10, this disk-encryption scheme is mandatory for all new Android devices.
On the desktop and server side, Microsoft introduced an all-new file system called the Resilient File System (ReFS), which is intended to replace NTFS in the long run. Starting as a file system for servers, ReFS introduces new features to make data storage more robust and efficient.
This work investigates the new technologies ext4 FBE and ReFS with respect to several aspects of forensic data extraction. We investigate the amount of information leakage through unencrypted metadata in Android’s FBE. We propose a generic method, and provide the appropriate tooling, to reconstruct forensic events on Android smartphones encrypted with FBE, which requires no knowledge of the encryption key. Based on a dataset of 3903 applications, we show that files’ metadata can be used to reconstruct the name, version, and installation date of all installed apps. Furthermore, using WhatsApp as an example, we show that information leakage through metadata can even be used to reconstruct a user’s behavior within a specific app.
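The following Python sketch illustrates the general fingerprinting idea behind such metadata-based reconstruction: even when file names and contents are encrypted, per-directory metadata such as file counts and sizes can be matched against fingerprints derived from known app versions. All package names, sizes, and the fingerprint function are invented for illustration; this is not the tooling developed in the thesis.

```python
# Illustrative sketch: match an app directory on an FBE-encrypted image
# against fingerprints built from known APK installations, using only
# unencrypted metadata (number of files and their sizes).

def fingerprint(file_sizes):
    """Order-independent fingerprint of a directory's (plaintext) file sizes."""
    return (len(file_sizes), tuple(sorted(file_sizes)))

# Reference database, e.g. derived from a corpus of known app versions.
known_apps = {
    fingerprint([812, 4096, 52340]): ("com.example.mail", "2.1.0"),
    fingerprint([812, 4096, 52340, 77]): ("com.example.mail", "2.2.0"),
    fingerprint([1290, 33012]): ("com.example.maps", "5.4.1"),
}

def identify(observed_sizes):
    return known_apps.get(fingerprint(observed_sizes), ("unknown", None))

# Sizes read from the encrypted image's metadata (names/contents stay encrypted).
print(identify([4096, 812, 52340]))      # ('com.example.mail', '2.1.0')
```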
To further enhance the forensic data extraction from FBE-encrypted Android disks, we present a new encryption-key recovery method tailored to FBE that works on a raw memory image. Furthermore, we extend The Sleuth Kit to automatically decrypt file names and file contents when working on FBE-enabled ext4 images, as well as the Plaso framework to extract events from encrypted ext4 partitions. Last but not least, we argue that the recovery of master keys from FBE partitions was straightforward due to a flaw in Google’s encryption-key derivation method.
On server and desktop systems, Microsoft has released ReFS, whose internal structures are not officially documented. We therefore reverse-engineered and documented these internal structures and their behavior. Based on these structures and the access processes that modify them, we show approaches to recover (deleted) files and older states from ReFS-formatted partitions. We also evaluate our implementation and the allocation strategy of the ReFS driver with respect to accuracy, runtime, and the ability to recover older file states.
Finally, with the knowledge of the internal ReFS structures and the threat of flaws in forensic software in mind, we implemented a structure-aware, coverage-guided fuzzing framework explicitly tailored to ReFS to find undetected security-critical flaws. With the new, complex features of ReFS, the driver grows larger and more complex, increasing the attack surface of the Windows kernel. Attackers can often use security-critical bugs in file system drivers to escalate privileges by mounting a well-prepared file system. Such an attack is also relevant to forensic data extraction because criminals can use it to prepare malicious disks that hamper or completely circumvent the extraction process by compromising the analysis environment.
We demonstrate the effectiveness of our fuzzing approach by finding 27 unique payloads that panic the Windows kernel when mounting or accessing ReFS partitions. Microsoft confirmed those bugs and acknowledged eight unique CVEs which allow remote code execution attacks.
With our overall findings, the forensic community should be well prepared for extracting data from the modern file systems used by mobile and desktop systems. With the help of the newly proposed fuzzing framework, forensic software can be hardened against severe anti-forensic methods by patching the discovered flaws.
Complexity is one of the great challenges of our time. Engineers, especially in development, must therefore master the ever-increasing complexity in diverse domains in order to realize product and thus company success. The appropriate modeling of the many different elements and relations of complex systems requires cross-domain, integrative, and thus holistic modeling approaches that offer context-specific, useful mathematical analysis and visualization capabilities, enabling developers to perform a multi-criteria evaluation of particular courses of action. Currently existing approaches, however, are only of limited suitability for this, among other reasons because of methodological deficits, their high modeling effort, limited analysis and visualization capabilities, or their weak support for multi-criteria evaluation.
This thesis therefore presents a concept that supports developers in holistic complexity management in product development by providing context-specific, suitable methods for the integrated modeling, analysis, visualization, and evaluation of complex systems for targeted decision-making. For the effective and efficient application of the methodology, an IT tool is presented that supports developers in their tasks within holistic complexity management.
Query processing is a traditional yet still active field of research. Its significance derives from the ever-increasing amount of data created and processed every day and the opportunities provided by analyzing these data. In today’s world, complete businesses are built on top of sophisticated data processing capabilities. However, with the growth of data, processing these huge amounts of data becomes more and more challenging. This is not only because of the time and resources it takes to process such amounts of data but also due to the energy costs. Consequently, researchers broadened the range of processing architectures to investigate for query processing beyond traditional processor-based systems. Next to programmable graphics processing units (GPUs), field-programmable gate arrays (FPGAs) have become of great interest due to their unique features. FPGAs not only allow for the construction of highly optimized hardware circuits for specific tasks but also enable the adaptation of the hardware to the tasks during runtime. Hence, many researchers have presented various proposals to exploit the features provided by FPGAs. Although the proposed systems can achieve high throughput and efficiency in general, they are often not able to accelerate queries that were not considered during their design. Performance and efficiency are best gained through specialization, and thus a system should adapt to an incoming and unknown query. This is possible with FPGAs due to their ability to be reconfigured fully or in parts during runtime. However, this comes at the cost of high startup times, as the FPGA has to be configured according to the query prior to its execution. Furthermore, it is almost impossible to generate hardware configurations for every possible query.
This thesis introduces an innovative FPGA-based near-data processing system able to process a wide variety of queries at I/O rate (line rate). It is based on reconfigurable and parametrizable accelerators. The accelerators are composed of parametrizable modules from a library. These modules do not merely implement a specific operator for a specific type but are optimized to implement operators for multiple types or even multiple functions without a drastic increase in resource usage. Another contribution of this thesis is the concept of optimistic query processing for demanding operators such as the join and the regular expression matching operator. It is based on the idea of approximately filtering as much data as possible in hardware without removing tuples that should be kept. The resulting, often reduced, intermediate table is guaranteed to be a superset of the accurate filter operation. A software-based operator implementation can then be applied to an intermediate table with fewer tuples to finalize the operation. As an example, the implementation of a module for regular expression matching is presented.
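The contract behind optimistic query processing can be illustrated with a small Python sketch: a cheap prefilter (standing in for the hardware) may keep false positives but never drops a matching tuple, and the exact software operator then finalizes the result on the reduced intermediate table. The substring prefilter and the example data are assumptions made for this illustration.

```python
import re

def optimistic_prefilter(rows, needle):
    """Stand-in for the hardware filter: cheap, may keep false positives,
    but never drops a row the exact operator would keep (superset guarantee)."""
    return [r for r in rows if needle in r]

def exact_filter(rows, pattern):
    """Software finalization: apply the exact (expensive) predicate."""
    rx = re.compile(pattern)
    return [r for r in rows if rx.search(r)]

rows = ["error 42 in module a", "errors: none", "all good", "error 7 in module b"]
pattern = r"error \d+"
candidate = optimistic_prefilter(rows, "error")   # 3 of 4 rows survive
result = exact_filter(candidate, pattern)          # exact result on reduced input
assert result == exact_filter(rows, pattern)       # same answer, less software work
print(result)
```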
Equipped with a parameter sequencer, accelerators assembled from this library are able to implement a greater variety of queries by setting the parameters of the modules according to the query to process. However, the schema in which the tables are stored also influences the design of the accelerator and may therefore limit the types of queries it can implement. For this, a hardware unit called ReOrder is introduced. It decouples the table schema and storage layout from the accelerator, enabling all accelerators to be used on every table in both row-oriented and column-oriented storage layouts. Even though the developed accelerators are able to implement a wide variety of queries, no one-size-fits-all accelerator is possible. Consequently, the system is designed to concatenate multiple partially reconfigurable (i.e., exchangeable) accelerators without a decrease in tuple throughput. This further broadens the range of queries that can be processed. As accelerators might not use all available resources within a partially reconfigurable FPGA region, the idea of in situ statistics generation is proposed. In situ statistics modules can utilize the free resources to gather information on the table that is processed by an accelerator without additional costs in terms of time.
Complementary to the hardware-related parts already mentioned, control software managing the execution of a query on such a system is presented as well. Starting from the basic components needed to execute queries on the platform, the description goes into depth on the particularities of such an FPGA-based query execution system. In particular, the query-placement problem is formulated, i.e., the problem of finding a query-specific configuration of the system’s hardware for an incoming query. In addition, the challenges of obtaining an optimal placement are discussed and exemplified using the problem of buffer assignment. Afterwards, the parameters of the modules have to be generated. In this regard, an algorithm to obtain the parameters for a ReOrder unit is presented and evaluated in depth. Additionally, considerations about parameter generators for a histogram module and the optimistic regular expression matching module are provided.
Finally, an implemented prototype of the system, the ReProVide unit, is evaluated. It is able to provide I/O-rate processing of simple as well as complex queries. Compared to a software-based in-memory database system executed on an ARM processor, queries could be executed 19.9× faster on the prototype on average. When executed on an x86 processor, comparable execution times have been observed. This means the prototype system, storing the tables on two solid-state drives, was able to process queries as fast as an x86 system holding the tables in memory. Furthermore, the prototype built is shown to be very energy-efficient, consuming less than 25% of the energy consumed by the x86 system on average.
Compilers are the cornerstone of software development. Their outdated, purely sequential mode of operation, however, no longer scales adequately with the size of today's software projects. The compiler's runtime, which is unproductive for development, takes up an ever larger share of the development cycle.
The bulk of this runtime is spent on data-flow analyses. Only the information base they compute makes it possible to apply optimizations, e.g., to reduce a program's execution time.
This dissertation describes the ParCan framework, which makes it possible to execute fixpoint-based data-flow analyses in a data-parallel fashion on a graphics card (GPGPU).
By integrating ParCan into the LLVM compiler, its runtime could be reduced by up to 31%.
The thesis also addresses further questions such as the efficiency of graph data structures and the efficient, deadlock-free synchronization of threads on the GPU.
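To make the notion of a fixpoint-based data-flow analysis concrete, the following pure-Python sketch solves a small liveness analysis with round-based (Jacobi-style) iteration, in which every node of a round is computed from the previous round's values only; this per-round independence is the kind of data parallelism a GPU can exploit. It illustrates the shape of the analysis, not ParCan itself, and the CFG and sets are invented for the example.

```python
# Hedged sketch: a round-based ("Jacobi-style") fixpoint solver for a simple
# liveness analysis.  In each round every CFG node is recomputed from the
# previous round's values only, which is exactly the shape of computation a
# GPU could evaluate for all nodes in parallel.  Pure-Python stand-in.

cfg_succ = {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: []}   # tiny CFG
use = {0: set(), 1: {"x"}, 2: {"y"}, 3: {"x"}, 4: {"z"}}
defs = {0: {"x", "y"}, 1: set(), 2: {"z"}, 3: {"z"}, 4: set()}

live_in = {n: set() for n in cfg_succ}
changed = True
while changed:                                   # fixpoint iteration
    changed = False
    new_in = {}
    for n in cfg_succ:                           # this loop is the data-parallel step
        live_out = set().union(*(live_in[s] for s in cfg_succ[n])) if cfg_succ[n] else set()
        new_in[n] = use[n] | (live_out - defs[n])
    if new_in != live_in:
        live_in, changed = new_in, True

print(live_in)   # e.g. live_in[1] == {'x', 'y'} once the fixpoint is reached
```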
It was not until the advent of powerful computers that corpus linguistics developed into a widely applied research methodology. Indeed, corpus linguistics heavily relies on computer-powered analysis tools. They are used daily by corpus linguists to retrieve examples and analyze authentic data from corpora of extensive size. Despite their indisputable importance, repeated remarks highlight the fact that corpus analysis tools have evolved little since their early days. Concordances, frequency lists, and collocation extraction still constitute the core functionalities of most corpus tools.
With the aim of incentivizing new functional developments, this thesis presents research on open demands in current corpus research practice and the related requirements for tool support. It builds on the assumption that more user-centered research is needed to bridge the gap between mainly computationally trained tool developers and their linguistic expert users, who come with specialized domain knowledge and often sophisticated analytical needs. The research is approached by means of three user investigations that enquire about corpus research workflows and analysis activities as well as theoretical principles and methodological considerations in corpus linguistics research practice. This way, a comprehensive picture of the corpus usage situation is assembled by combining insights from open-ended enquiries (interviews) with quantitative results on selected aspects of the corpus analysis scenario (questionnaire), derived from enquiries with more than 100 corpus users overall. Based on the results, a range of open demands for corpus research and tools are identified and discussed. They relate to (1) corpus resources, (2) general aspects of tools, (3) corpus analysis procedures, and (4) best practices. The results show that the open demands address challenges on very different operational levels, ranging from the availability of corpus resources and reliable annotations, technical requirements related to scalability and interoperability issues, usability, and technical and methodological skills up to proper functional demands. The thesis discusses potential paths to address the open demands and provides pointers to recent developments in corpus linguistics and related fields, in particular computational linguistics and natural language processing as well as linguistic information visualization.
The research contribution of this thesis is twofold. On the methodological level, it elaborates on methods and challenges for user-centered research on tools for open-ended tasks and provides entry points for further user-centered research by identifying and organizing, as a reference, the basic building blocks of corpus linguistics research. On the content level, it provides first insights into user perspectives and needs related to corpus research practice. It describes concrete demands and discusses paths to their solution. In this way, it prepares the ground for further in-depth studies and user-centered development of new corpus functionalities for specific demands.
Manufacturing processes are inevitably subject to variations, which cause the quality of manufactured products to vary. Since quality variations negatively affect product properties, Robust Design aims to minimize these variations. To achieve this goal, different virtual validation tools can be used during product development.
This thesis presents a holistic process model that serves as a framework for the systematic use and validation of virtual validation methods. This novel procedure enables the targeted use of the different virtual validation tools employed during product development. In addition, it is shown for the first time how tolerance-analysis models for mechanisms can be validated.
Product developers are thus provided with a generally applicable, adaptable approach that enables them to use virtual validation methods in a targeted manner and to critically assess their validity with the help of validations.
The author further shows how the process model can be adapted to practically relevant problems. This is illustrated by the kinematics of an X-ray collimator, which serves as a demonstrator.
Few recent areas of research have had a more significant impact on industrial production than Machine Learning (ML). Ranging from intelligent quality inspection to fully autonomous robots and vehicles, Machine Learning in industrial applications (industrial ML) is poised to permanently change how production systems operate. By offering powerful tools for automation and intelligent decision-making, it has become a highly sought-after technology across most industries. Applying industrial ML to various applications in the production system has also become a popular focus of recent research efforts. Several breakthrough applications and designs have further fueled the excitement for ML-empowered systems. Thus, many companies have increased their investment in this technology to drive disruptive innovation in cost-effectiveness and performance.
Frequently, new scientific findings and discoveries offer new potential for industrial ML applications. However, these applications depend on several factors, such as the deployment environment, the amount of available data, and the ML approach chosen, which makes application development a non-trivial pursuit. As a result, while ML is applied to a growing number of systems and environments in industry, many aspects of its development remain unclear.
To alleviate the issues blocking further advancement in industrial ML, various research endeavors have been undertaken to better understand how ML applications work in industrial information systems, and which requirements and challenges practitioners face during development and implementation. This thesis ties into the current research and further contributes by exploring the development process, the challenges, and the value of libraries and frameworks in industrial ML applications.
To this end, this thesis contains an extensive literature review and four independent studies on real-world applications of ML in the context of production at a large automobile OEM. The systematic literature review explores current research on industrial ML applications with a comprehensive quantitative and qualitative analysis. The four studies each contain a Design Science research case on the development and design of an industrial ML application. These four ML applications are 1) An Anomaly Detection System in the brownfield on a Monorail Conveyor, 2) A multi-model Quality Inspection application in Laser Beam Welding with Supervised Learning, 3) A Deep Reinforcement Learning system for fully Autonomous Assembly on industrial robots, and 4) A self-service Quality Inspection Toolkit augmented with Machine Teaching and Interactive ML functions.
In these studies, the design and development process of the application is documented in detail. Furthermore, during each study, the benefits, challenges, creative application solutions, and results are reported upon, analyzed, and critically discussed. Finally, the findings across all studies are structured in the discussion, and connections between their insights are reviewed.
Our literature review finds that the vast majority of research on industrial applications of ML applies Supervised Learning as the primary AI approach. Unsupervised and Reinforcement Learning are used much less frequently.
In our first study, an Unsupervised and Semi-Supervised Anomaly Detection system for a Monorail Conveyor is developed during ramp-up that performs well even in the absence of labeled data. The system could be implemented much more quickly and with fewer requirements than a comparable approach using Supervised Learning.
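As a hedged illustration of the kind of unsupervised approach that works without labels (not the system built in the study), the following sketch uses scikit-learn's IsolationForest on synthetic sensor features; the feature choice, parameters, and data are assumptions for this example, and scikit-learn is assumed to be installed.

```python
# Minimal sketch: unsupervised anomaly detection on unlabeled conveyor-like
# sensor features -- no labels required, which is the main practical advantage
# reported for the first study.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[1.0, 50.0], scale=[0.05, 2.0], size=(500, 2))   # e.g. current, temperature
faulty = np.array([[1.6, 75.0], [0.2, 20.0]])                            # two injected anomalies

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(faulty))        # [-1 -1]: both flagged as anomalous
print(model.predict(normal[:3]))    # mostly +1: normal operation
```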
In the second study, a multi-model Quality Inspection System for Laser Beam Welding is created. The system used multiple ML architectures and algorithms to detect four distinct quality measures purely from a single source of grayscale images. It thereby outperformed the conventional methods in speed, accuracy, and range of detected faults.
The third study presents the development of a Deep Reinforcement Learning agent on an industrial robot that performs fully autonomous assembly of a body part. While the prototype performs well and is able to execute successful assemblies in the real world, the effort and expertise that were required during development are disproportionate to the gained value. However, given specific circumstances that require the particular strengths of DRL, it may present a viable option.
In the last study, Machine Teaching and Interactive Machine Learning are used to augment a self-service computer vision application. By using readable user interfaces, explainability, and active learning, users are enabled to train and evaluate their own Image Classification and Object Detection models. While also useful for evaluating production-related quality inspection or detection use cases, it was especially remarkable that the users reported great enthusiasm about experiencing and better understanding modern ML applications. In this case, Transfer Learning proved to be a major enabler for fast and accessible model training.
In developing industrial ML applications, it became apparent to us that, just as innovation requires invention and commercialization, an industrial ML application requires an ML model combined with context and utility.
In summary, this dissertation presents exploratory insights into the rapidly growing field of industrial Machine Learning applications by researching applications in context. Subsequently, the performed studies may serve as an example of how industrial ML applications can be designed, developed, and evaluated in the context of the manufacturing industry.
Data are playing an ever more important role in all areas of life. This development can also be observed in product development and product creation. The combination of virtual product development with continuous and holistic data usage is referred to as «Digital Engineering». Implementing Digital Engineering entails a profound transformation and a change in the previous roles of the people involved and the tools used. The aim is to use as much of the available data as possible and to process these data with machine-learning algorithms. In product creation, there are numerous geometry data (e.g., CAD models or measurement data) as well as data linked to a geometry (e.g., numerical simulations and their results). In this dissertation, the method of spherical detector surfaces was developed, which makes it possible to transform arbitrary geometries into a uniform numerical matrix. The developed method can also be used to convert information linked to the geometry into further such uniform matrices and thus make it available to machine-learning algorithms. The developed procedure is applied to three different application examples, and all necessary steps are described in detail. This also includes the derivation of the so-called «DNA of an FE simulation».
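One plausible reading of the spherical-detector-surface idea is sketched below in Python: the geometry (represented here simply as a point cloud) is centered, each point is assigned to an angular detector cell on a surrounding sphere, and each cell records the largest radial extent it sees, yielding a fixed-size matrix independent of the input geometry. The binning scheme and resolution are assumptions for illustration, not the dissertation's exact formulation.

```python
import numpy as np

def spherical_detector_matrix(points, n_theta=18, n_phi=36):
    """Map an arbitrary point cloud to a fixed (n_theta x n_phi) matrix:
    each cell holds the maximum radial distance of the geometry seen by
    that angular 'detector' (0 where the cell sees nothing).  Illustrative
    reading of the method, not the dissertation's exact formulation."""
    p = np.asarray(points, dtype=float)
    p = p - p.mean(axis=0)                       # centre the geometry
    r = np.linalg.norm(p, axis=1)
    theta = np.arccos(np.clip(p[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))   # [0, pi]
    phi = np.arctan2(p[:, 1], p[:, 0]) + np.pi                              # [0, 2*pi]
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pi_ = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    M = np.zeros((n_theta, n_phi))
    np.maximum.at(M, (ti, pi_), r)               # keep the largest radius per cell
    return M

cube = np.array(np.meshgrid([-1, 1], [-1, 1], [-1, 1])).reshape(3, -1).T   # 8 corners
print(spherical_detector_matrix(cube).shape)     # (18, 36) regardless of the input
```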
This dissertation presents symbolic loop compilation, the first full-fledged approach to symbolically map loop nests onto tightly coupled processor arrays (TCPAs), a class of loop accelerators that consist of a grid of processing elements (PEs). It is:
Full-fledged because it covers all steps of compilation, including space-time mapping, code generation, and generation of configuration data for all involved hardware components. A full-fledged compiler is paramount because manual mapping for accelerators, such as TCPAs, is difficult, tedious, and, most of all, error-prone.
Symbolic because symbolic loop compilation assumes the loop bounds and number of allocated PEs to be unknown during compile time, thus allowing them to be chosen at run time. This flexibility benefits resource-aware applications where the number of PEs is known only at run time.
Symbolic loop compilation is a hybrid static/dynamic approach with two phases:
At compile time, all involved NP-hard problems (such as resource-constrained modulo scheduling) are solved symbolically, resulting in a so-called symbolic configuration, which is a space-efficient intermediate representation parameterized in the loop bounds and number of PEs.
This phase is called symbolic mapping.
Because it takes place at compile time, there is ample time to solve the involved NP-hard problems.
At run time, for each requested accelerated execution of a loop program with given loop bounds and number of allocated PEs, concrete PE programs and configuration data are generated from the symbolic configuration according to these parameter values.
This phase is called instantiation.
In the context of these two phases, this dissertation presents the following contributions:
1. Symbolic modulo scheduling is a technique for solving resource-constrained modulo scheduling for multi-dimensional loop nests when the loop bounds and number of available PEs are unknown. We show that a latency-minimal solution can be found if the number of PEs is known beforehand and a near latency-minimal solution if it is not (a simplified scheduling bound is sketched after this list).
2. Polyhedral syntax trees are a space-efficient, parameterized representation of a set of PE program variants from which the necessary concrete PE programs are generated at run time.
3. Instantiation includes methods to generate concrete programs and configuration data from a symbolic configuration in a manner whose time complexity is not proportional to the loop bounds or number of allocated PEs.
4. Run-time enforcement for loops is a technique that utilizes the flexibility granted by symbolic loop compilation to enforce requirements on non-functional properties by dynamically adapting the mapping before execution. An example is to allocate a number of PEs that satisfies a given latency bound.
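As a simplified illustration of the scheduling bound referenced in contribution 1 (not the thesis' symbolic algorithm), the following Python sketch keeps the classic minimum initiation interval II = max(ResMII, RecMII) parameterized in the number of allocated PEs at "compile time" and instantiates it for concrete PE counts at "run time"; operation counts, resource numbers, and recurrences are invented for the example.

```python
from math import ceil

def min_initiation_interval(op_count, units_per_pe, recurrences):
    """Classic lower bound used in resource-constrained modulo scheduling:
    II >= ResMII (resource pressure) and II >= RecMII (dependence cycles).
    Returned as a function of the number of allocated PEs, mirroring the
    idea of keeping the schedule parameterised until run time."""
    def ii(num_pes):
        res_mii = ceil(op_count / (units_per_pe * num_pes))
        rec_mii = max((ceil(delay / distance) for delay, distance in recurrences), default=1)
        return max(res_mii, rec_mii)
    return ii

# "Compile time": build the parameterised bound once.
ii_of = min_initiation_interval(op_count=12, units_per_pe=2,
                                recurrences=[(3, 1), (4, 2)])   # (latency, distance)

# "Run time": instantiate for the PEs actually allocated.
for pes in (1, 2, 4):
    print(pes, ii_of(pes))     # 1 -> 6, 2 -> 3, 4 -> 3 (RecMII dominates at 4 PEs)
```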
In summary, the methods presented in this dissertation enable, for the first time, the full-fledged symbolic compilation of loop nests onto TCPAs.
Without these methods, a given loop nest would need to be fully recompiled each time the loop bounds or number of available PEs change, which would render run-time mapping impractical and even conventional compilation overly time- and space-consuming.
This thesis focuses on protecting data in use (i.e., while being processed). We investigate the requirements of an execution container which makes it possible to eliminate the need to fully trust a single service operator/provider. The main approach relies on a set of properties of a trustworthy computational model. The foremost purpose is to protect users’ data privacy in data-centric services ranging from local to remote environments.
First, we take a closer look at protecting users from being targeted by software (SW) suppliers or attackers exploiting SW update systems. The main goal is to ensure update consistency and to avoid frozen updates, i.e., to protect users against malicious SW updates. We define the conditions for a secure SW update system that allow the user to detect malicious updates. Based on those conditions, we depict the Malicious Update Detector (MUD) framework. Similar to Certificate Transparency (CT), MUD is designed using a Merkle tree (MT) data structure to keep logs of SW versions. The design aims at achieving transparency of the entries of SW versions in the system using a primary log and a set of attestor logs, and thereby at maintaining update consistency. Nevertheless, such a design might come with performance overhead. Therefore, a prototype is evaluated as a proof of concept to understand the applicability of the MUD framework to Ubuntu APT.
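A minimal Python sketch of the CT-style log idea behind MUD is given below: an append-only Merkle tree over software-version records lets a client compare the primary log's root with independent attestor logs, so a withheld or forged version entry shows up as a root mismatch. Record formats and hashing details are assumptions for illustration, not MUD's actual design.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of an append-only log of software-version records."""
    level = [h(b"leaf:" + x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(b"node:" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The primary log and an independent attestor log record the same version entries.
versions = [b"pkg=openssl;ver=3.0.13;hash=ab12", b"pkg=openssl;ver=3.0.14;hash=cd34"]
primary_root = merkle_root(versions)
attestor_root = merkle_root(versions)
assert primary_root == attestor_root          # consistent view: no targeted/frozen update

# A log that silently drops the newest version yields a different root and is detected.
stale_root = merkle_root(versions[:1])
print(primary_root != stale_root)             # True
```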
Second, the attention is directed towards the need to protect users’ data in remote environments against malicious attackers and/or honest-but-curious operators/providers. We provide a conceptual, abstract definition of a trustworthy execution container, which we call Sealed Computation (SC). SC is defined via a set of properties; if these are fulfilled, the system can provide a trustworthy service in terms of protecting data in use and software confidentiality. However, hardware technologies alone still cannot provide full protection. Thus, a mutual trust-establishment procedure is presented. It employs the properties of SC, which are Sealing, Attestation, Black-box, and Tamper-resistance, along with an Auditor role. The procedure is designed to remove the need to trust any single entity in the system. It is created for an abstract, general architecture of cloud-based applications that includes data providers and consumers, providers of the cloud infrastructure, the SW application, and the SC container, in addition to the auditing party. The attacker assumption restricts the number of parties that may act maliciously simultaneously. Although this part of the thesis is theoretical, it presents strong preliminary guidelines for designing systems that consider data privacy at an early stage, i.e., Privacy by Design (PbD). Moreover, several commercial technologies exist which can, to some extent, be viewed as implementations of the SC concept, such as Intel’s Software Guard Extensions (SGX) and Hardware Security Modules (HSMs).
Then, the notion of SC is extended in a hybrid model to solve Secure Multiparty Computation (SMC) resiliently. The designed model is a distributed system of trusted modules (SC modules) together with a set of untrusted processes representing the parties interested in computing a shared function. The model uses reliable-broadcast primitives to exchange values among the trusted modules. We show that the model fulfills the SMC properties with considerable resilience and availability under the attacker assumptions.
Finally, taking PbD into account, we study a use case that employs SGX as an approximation (instance) of the SC concept in a smart-home application for a healthcare service. Unsurprisingly, additional protection comes at a cost. Therefore, a proof of concept is implemented to understand the overhead of utilizing SGX in such a data-centric system. We hope that the study provides systems developers with a clear picture of the substantial impact on performance, helping them balance and reduce the security-performance trade-off. This can be achieved by careful design decisions that take the requirements of data operations in their systems into account and allow for utilizing the SC properties for stronger protection.
Federated Medical Research Databases: Architectures, Data Integration, and Query Logic (2021)
Today, electronic medical data are used in many ways beyond patient care, especially in research. In the context of personalized medicine, however, ever larger amounts of data are needed to perform meaningful analyses. The obvious solution of combining and analyzing data from multiple sites, however, brings a number of challenges. Three of them are addressed in this dissertation: architectures, data integration, and query logic.
An important foundation is provided by software architectures in which data can be stored and analyzed in compliance with data protection regulations. In recent years, collaboration across sites or even consortia has become increasingly important. This dissertation provides two contributions on the translation of electronic cohort queries and their integration into federated research architectures. In this way, research networks become interoperable for distributed cohort queries even if they are based on different technologies.
The heterogeneity of medical data generated at different sites constitutes a barrier to networked research. With the aim of overcoming it, the dissertation presents a method that enables semi-automatic mapping and merging of heterogeneous data sets using a lexical approach.
In recent years, various user-friendly query tools have been developed for the analysis of medical data. However, these tools can be limited with respect to the query complexity they can express. A further contribution of this dissertation therefore investigates to what extent the currently common declarative methods for electronic cohort queries can be complemented by procedural ones. Finally, the last contribution of this dissertation addresses the modeling of temporal relationships in cohort queries and introduces a new graphical approach for this purpose.
In summary, the results of this dissertation show that collaboration within individual research networks or across several of them can be further improved by the intelligent processing of electronic cohort queries.
Data stream systems support queries on continuously arriving data. They provide query facilities similar to those of relational database systems. However, data stream systems continuously evaluate the queries on the arriving data and discard the data afterwards. Thus, it is possible to process high volumes of rapidly arriving data. Application examples are the monitoring of IT infrastructure or the processing of data from wireless sensor networks in wildland or animal surveillance scenarios. Compared to centralized data stream systems, distributed data stream systems can lower the resource demand, improve the performance, and increase the lifetime of wireless sensor networks. This is particularly true if the data sources are already distributed and the hosts of the data sources take part in query processing.
In my dissertation, I investigate the interdependencies of logical query optimization and the assignment of operators to hosts for distributed data stream systems on heterogeneous hosts. In particular, I discuss the mathematical representation of selected optimization goals and constraints like resource limits for cost-based query optimization. I propose a technique to estimate whether logical query optimization steps may interfere with the subsequent assignment of operators to hosts. Well-known heuristic algorithms are adapted to the optimization problem of assigning operators to hosts. An evaluation compares the different algorithms. Moreover, I propose an algorithm for load balancing by multiple instantiation of operators.
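The flavor of such an operator-to-host assignment heuristic can be sketched in a few lines of Python: operators with resource demands are placed greedily on heterogeneous hosts, respecting per-host capacity limits and preferring the host with the lowest load-aware cost estimate. The cost model, demands, and host parameters are invented for this illustration and do not reproduce the dissertation's algorithms.

```python
# Hedged sketch: greedy assignment of query operators to heterogeneous hosts.
# Each operator has a CPU demand; each host a capacity and a relative speed.
# The heuristic places the most demanding operators first on the feasible host
# with the lowest estimated cost (current load + demand) / speed.

operators = {"src": 1.0, "filter": 2.0, "window-join": 5.0, "aggregate": 3.0}
hosts = {"sensor-node": {"capacity": 3.0, "speed": 0.5},
         "gateway":     {"capacity": 6.0, "speed": 1.0},
         "server":      {"capacity": 20.0, "speed": 4.0}}

def greedy_placement(operators, hosts):
    load = {hst: 0.0 for hst in hosts}
    placement = {}
    for op, demand in sorted(operators.items(), key=lambda kv: -kv[1]):
        feasible = [hst for hst in hosts if load[hst] + demand <= hosts[hst]["capacity"]]
        if not feasible:
            raise RuntimeError(f"no host can take operator {op}")
        best = min(feasible, key=lambda hst: (load[hst] + demand) / hosts[hst]["speed"])
        placement[op] = best
        load[best] += demand
    return placement

print(greedy_placement(operators, hosts))
# The heavy window-join and aggregate land on 'server'; lighter operators
# spread to 'gateway' and 'sensor-node'.
```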
The inflationary use of currently prominent terms, such as data centricity or the digital age, without the effort of naming and explaining how the phenomena they refer to are to be understood and by which characteristics they are marked, does nothing more than obscure meaning without any chance of gaining insight or a well-founded opinion. Algorithms, data, and digitization are three of the candidates that are frequently used but only rarely explained in a well-founded and differentiated manner. This article addresses this desideratum. It deals with these 'three unknowns' and attempts to grasp and substantiate the terms and the phenomena they refer to, and to formulate what exactly is 'new' and what constitutes the quality of difference to previous states (an age without digitization).
In this thesis, we investigate different possibilities to protect the Android ecosystem better. We focus on protection mechanisms for application developers, and present modern attacks against sandbox-protected applications and the developer’s intellectual property, ultimately providing enhanced approaches for defense against these attacks. Our defensive approaches range from runtime-shielding measures to analysis-impeding obfuscation mechanisms.
First, we take a closer look at communication possibilities of sandboxed applications on Android, namely the UI layer and Android’s inter-process communication. We introduce attacks on applications working through the actors on Android’s UI, starting with overlay windows, accessibility services, input editors, and screen captures. Android’s inter-process communication is the second attack avenue we pursue. It is the primary means of communication for apps to interact with each other despite being sandboxed by the Android system. We show through assessments of the Google Play Store and third-party app stores that attacks on these mechanisms constitute a blind spot in the current attack models considered by developers. To provide relief, we introduce new protection mechanisms that developers can implement, and we enhance testing methodologies to consider these attacks in the future.
Second, we direct the reader’s attention towards attacks on the developer’s intellectual property. Due to Android’s open-source nature and openly communicated standards, a trend of repackaging popular applications with malicious enhancements and republishing the malicious app has rooted itself in the malware community. To counteract this development, we present an enhanced centroid-based approach to clone detection and improved analysis-impeding obfuscation mechanisms that build on virtualization-based obfuscation. Our obfuscation approach works on Android’s current runtime environment, as well as the previously employed ‘Dalvik virtual machine’, and can be used to obfuscate critical portions of an application’s functionality against prying eyes. To make valid assumptions about the strength of virtualization-based obfuscation, we conduct a de-obfuscation study on the more mature x86/x64 platform, developing a reverse engineering approach for virtualization-obfuscated binaries.
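A strongly simplified Python sketch of the centroid idea behind clone detection follows: each method is summarized by a small structural feature vector, the vectors are averaged into an app-level centroid, and apps whose centroids lie unusually close together are flagged as repackaging candidates. Features, thresholds, and data are assumptions for illustration only, not the enhanced approach developed in the thesis.

```python
import math

# Hedged sketch of the centroid idea behind repackaging/clone detection:
# summarise each method by a small structural feature vector (here:
# basic-block count, edge count, call count), average them into an
# app-level centroid, and flag apps whose centroids are unusually close.

def centroid(method_features):
    dims = len(method_features[0])
    return tuple(sum(m[d] for m in method_features) / len(method_features) for d in range(dims))

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

original = [(12, 15, 4), (30, 41, 9), (7, 8, 2)]       # per-method features
repackaged = [(12, 15, 4), (30, 41, 9), (7, 8, 3)]     # ad-library tweak in one method
unrelated = [(3, 3, 1), (5, 6, 0)]

c_orig, c_rep, c_unrel = map(centroid, (original, repackaged, unrelated))
print(distance(c_orig, c_rep) < 1.0)      # True: likely clone candidate
print(distance(c_orig, c_unrel) < 1.0)    # False
```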
We analyzed several hundred thousand Android applications during our research with automated approaches and several thousand apps with manual analysis, always opting for a responsible disclosure process of found vulnerabilities by providing developers with at least three months’ due notice before attempting a publication. The tools presented in this thesis are open-sourced under the MIT license to facilitate their inclusion in development projects and their extension or further development. With the insights gained through the research for this thesis, we hope to provide developers with the tools and testing approaches they need to make the Android ecosystem more secure and safe.
Inner source (IS) is the use of open source software development practices and the establishment of open source-like communities within an organization. The organization may still develop proprietary software but internally opens up its development. IS promises to resolve problems of traditional software development by easing software reuse and enabling parties within an organization to collaborate across organizational boundaries.
However, it is unclear what elements constitute IS (problem I) and how to measure the presence and magnitude of IS collaboration (problem II). The large majority of research articles on IS to date are limited to qualitative results regarding IS. There are as yet no quantitative studies on IS collaboration exploring how much IS collaboration takes place or how IS practices affect it (problem III).
We followed a three-phase research approach to address these problems. First, we performed an extensive literature survey and analyzed 43 IS publications. We found that four key elements constitute IS (shared cultural values, open development environment, communities around software, IS-specific scenarios) but that IS programs and projects differ on at least five dimensions (addressing problem I).
Second, we developed the patch-flow method (and a software tool implementing it) for measuring IS collaboration. Patch-flow is the flow of code contributions across organizational boundaries ("silos") such as organizational unit or cost center boundaries. We evaluated the method using case study research with a non-trivial industry organization and found it to be viable and useful to practitioners (addressing problem II).
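The core of the patch-flow measurement can be illustrated with a short Python sketch: each contribution is attributed to the author's organizational unit and counted as patch-flow when that unit differs from the unit owning the receiving component. The unit names, ownership table, and commits are invented for this example; the actual method and tool are described in the thesis.

```python
# Hedged sketch of the patch-flow idea: a contribution counts as patch-flow
# when the contributing author's organisational unit differs from the unit
# that owns the receiving inner source component.

author_unit = {"alice": "BU-Powertrain", "bob": "BU-Infotainment", "carol": "IT-Platform"}
component_owner = {"logging-lib": "IT-Platform", "can-parser": "BU-Powertrain"}

commits = [
    {"author": "alice", "component": "logging-lib"},   # crosses a unit boundary
    {"author": "carol", "component": "logging-lib"},   # stays within the owning unit
    {"author": "bob",   "component": "can-parser"},    # crosses a unit boundary
]

def patch_flow(commits):
    flows = []
    for c in commits:
        src = author_unit[c["author"]]
        dst = component_owner[c["component"]]
        if src != dst:
            flows.append((src, dst, c["component"]))
    return flows

flows = patch_flow(commits)
print(len(flows), "of", len(commits), "contributions are patch-flow")   # 2 of 3
for f in flows:
    print(f)
```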
Third, we performed a multiple-case case study with three large software organizations running a total of five IS programs. We identified the IS practices used and the resulting patch-flow. We found patch-flow to exist in all organizations, but only a fraction of all code contributions to IS projects constituted patch-flow. We observed that the number of IS practices implemented correlates with the distance between the parties involved in collaboration. This indicates that IS is particularly suited to enabling collaboration between parties at a large distance within an organization (addressing problem III).
This thesis delivers a holistic definition of IS and the first classification framework for IS programs and projects. Researchers can use such a framework to reason about generalizability of their results more precisely. The patch-flow measurement method is the first of its kind to measure and quantify IS collaboration and can serve as a base for further quantitative analyses of IS collaboration. The exploration of the patch-flow in the three industry cases can serve as example and benchmark for practitioners.
The Internet of Things (IoT) brings comfort into the life of users. It is convenient to control the lights at home with an app without leaving the couch or to open the front door with a remote control. This comfort, however, comes with security risks, as the wireless communication between components often relies on proprietary protocols. Such protocols are designed under size and energy constraints, whereby security is often only a secondary factor. Moreover, even when a standard protocol such as IEEE 802.11 WLAN with enabled encryption is used, mobile devices such as smartphones can be located, threatening the location privacy of users.
This thesis is divided into two main parts. In the first part, we demonstrate how to passively locate a smartphone indoors using IEEE 802.11 WLAN and contribute a geolocation system with a mean accuracy of 0.58 m. Subsequently, we analyze how a company can incentivize users with different levels of privacy-awareness to connect to a provided WLAN and give up their location privacy in exchange for certain benefits such as shopping discounts. We model this situation as a Bayesian Stackelberg game to find the company's best strategy.
In the second part, we showcase the challenges that arise for security researchers when investigating proprietary wireless protocols. Software Defined Radios (SDRs) offer a generic way to analyze such protocols operating on frequencies like 433.92 MHz or 868.3 MHz, where no standard hardware such as a WLAN stick is available. SDRs, however, deliver raw signals that have to be demodulated and decoded before researchers can reverse-engineer the protocol format.
Our main contribution to this process is an open-source software suite called Universal Radio Hacker (URH) which is, to the best of our knowledge, the first complete suite for wireless protocol investigation with SDRs. URH breaks the protocol investigation process down into the phases Interpretation, Analysis, Generation, and Simulation.
The goal of the Interpretation phase is to identify the transmitted bits and bytes by demodulating the signal. Apart from letting users manually adjust demodulation parameters, we contribute a set of algorithms to automatically find these parameters and integrate them into URH.
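A toy version of this Interpretation step for a simple on-off-keyed (OOK/ASK) signal is sketched below in Python: the magnitude is thresholded, the samples-per-symbol value is estimated from the shortest stable run, and the bit string is read off. URH's parameter-detection algorithms are more general; the signal, threshold rule, and estimator here are assumptions made for illustration.

```python
import numpy as np

def demodulate_ook(signal, samples_per_symbol=None):
    """Toy Interpretation step for an on-off-keyed signal: threshold the
    magnitude, infer the symbol length from the shortest stable run if it
    is not given, and emit the bit string."""
    mag = np.abs(signal)
    bits_raw = (mag > (mag.max() + mag.min()) / 2).astype(int)
    # Run-length encode the thresholded samples.
    edges = np.flatnonzero(np.diff(bits_raw)) + 1
    runs = np.diff(np.concatenate(([0], edges, [len(bits_raw)])))
    values = bits_raw[np.concatenate(([0], edges))]
    if samples_per_symbol is None:
        samples_per_symbol = int(runs.min())          # crude estimate
    return "".join(str(v) * round(r / samples_per_symbol) for v, r in zip(values, runs))

# Synthetic baseband burst: bit pattern 1 0 1 1 0 at 10 samples per symbol.
sps, pattern = 10, [1, 0, 1, 1, 0]
signal = np.repeat(pattern, sps).astype(float) + 0.05 * np.random.default_rng(1).standard_normal(5 * sps)
print(demodulate_ook(signal))   # '10110'
```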
In the Analysis phase, the protocol format is reverse-engineered from the demodulated bits. This is a time-consuming manual process that slows down a security analysis. To address this problem, we design and implement a modular system that automatically finds protocol fields such as addresses and checksums. In combination with the automatic detection of modulation parameters, this speeds up the security analysis of unknown wireless protocols.
URH enables researchers to perform attacks on stateless and stateful protocols in the Generation and Simulation phase, respectively. In the Generation phase, users can apply fuzzing to arbitrary data ranges, while the Simulation component of URH models protocol state machines and dynamically reacts to incoming messages from the investigated devices. In both phases, the software automatically applies modulation and encoding to the bits that should be sent. We demonstrate three attacks on IoT devices that were found and executed with URH. The most complex attack involves opening an AES-protected wireless door lock in real time.
The steadily advancing trend towards multi- and manycore computing architectures bears enormous challenges for developers of application software. To be able to make efficient use of the raw parallelism provided by the hardware, programs must explicitly cater for that fact. The classic programming model of a multithreaded application process, which consists of a number of control flows (threads) managed and scheduled by the operating-system kernel within a shared address space, is being increasingly stretched to its limits: on the one hand, creating threads and switching between them is not sufficiently lightweight; on the other hand, structuring a parallel application around threads is often cumbersome and puts needless obstacles in the programmer’s way.
A suitable alternative to multithreaded programming is the use of a so-called concurrency platform that supports developers in articulating applications as a conglomeration of fine-grained concurrent activities. Concurrency platforms come with a runtime system that is responsible for dispatching the lightweight work packages to the available computing resources. Such runtime systems generally build upon the abstractions provided by an underlying commodity operating system such as Linux – that is, upon threads as abstractions of processor cores. This construction results in a number of disadvantages: for instance, the operating system’s scheduler acts without consulting the runtime system, thus making decisions that are potentially unfavourable from the application’s point of view; the coexistence of multiple parallel application processes causes problematic reciprocal interference; blocking system calls cause a temporary loss of parallelism.
This thesis presents AtroPOS, the design of an atrophied parallel operating system that is specially geared towards supporting concurrency platforms on manycore systems. AtroPOS is a derivative of the OctoPOS operating system and has undergone comprehensive further development; it rests on the paradigm of invasive computing and adopts its fundamental concepts: resource-aware programming, exclusive allocation of processor cores to applications, tailoring and dynamic reconfigurability. The operating-system kernel provides a boiled-down set of essential low-level abstractions on top of which arbitrary runtime libraries can be placed. InvRT, the invasive runtime system that supports executing applications of invasive computing, was developed as a reference runtime library.
By default, AtroPOS makes the existing physical processor cores directly available to the application; their virtualisation is strictly optional and there is no notion of threads. The scheduling of user control flows is carried out purely on the user level by the runtime system without involving the operating-system kernel; this allows for the efficient handling even of very fine-grained concurrency within the application. System calls that may block within the kernel have asynchronous invocation semantics and return immediately upon blocking so that loss of parallelism during the waiting time is ruled out by design. Notification of completed system operations is carried out by means of a generic mechanism that passes user-defined data structures upward to the application and can be used by the runtime system to construct arbitrary synchronisation data structures such as futures. The same versatile mechanism is harnessed on tiled computing systems to allow parts of a distributed application to communicate with one another.
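The asynchronous invocation semantics can be mimicked in user space with a short Python analogy (an analogy only, not the AtroPOS interface): submitting an operation immediately returns a future, the runtime keeps dispatching other work, and a later completion notification carrying a user-defined payload fulfils the future. The function names and the thread-based "kernel side" are inventions for this sketch.

```python
# Hedged Python analogy of the asynchronous system-call semantics: submitting
# an operation returns immediately with a future; a completion notification
# later carries a user-defined payload that fulfils the future, so the
# user-level scheduler never loses parallelism while waiting.
import threading, time
from concurrent.futures import Future

def submit_async_read(path, delay=0.1):
    fut = Future()                      # user-defined completion object
    def kernel_side():
        time.sleep(delay)               # stand-in for the blocking part in the kernel
        fut.set_result(f"contents of {path}")   # completion notification
    threading.Thread(target=kernel_side, daemon=True).start()
    return fut                          # returns immediately upon "blocking"

fut = submit_async_read("/etc/hostname")
# The runtime system keeps dispatching other fine-grained activities here ...
other_work = sum(i * i for i in range(10_000))
# ... and only synchronises when the continuation actually needs the result.
print(other_work, fut.result())
```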
In addition, AtroPOS offers configurable vertical isolation: the strict separation of the operating-system kernel from the application can be enabled and disabled in a coarse- and fine-grained manner, and both statically and dynamically. With this, type-safe applications can issue system calls as ordinary function calls and thus lower their direct and indirect costs.
The aforementioned concepts were implemented in the AtroPOS kernel and the InvRT runtime system in the context of this thesis; they were evaluated with the aid of micro-benchmarks and various application suites. Moreover, the runtime library of the parallel programming language Cilk Plus – an extension of C/C++ – was ported to the AtroPOS interface in order to showcase the versatility of the approach.