Department Informatik
In the context of the United Nations Paris Climate Agreement of 2016, the majority of the leading global automotive manufacturers have committed to electrifying their fleets. A particular challenge in achieving this transformation is the efficient and economical development of new types of battery systems that meet the high customer requirements for electric range and fast-charging capability as well as legally required safety standards. These requirements must be guaranteed over the entire vehicle lifetime. However, the battery ages over time due to electrochemical degradation effects during operation. As a consequence, the battery state needs to be continuously monitored and analyzed. This calls for new methods of analysis, as the current characterization of the battery state during maintenance involves high financial effort and time-consuming measurement procedures and is limited by the low number of available test capacities.
An innovative and scalable alternative is offered by deploying battery models. In this context, this thesis addresses the research question of the extent to which battery-electric modeling can determine the battery state using only in-vehicle operating data. To this end, the approach is divided into two research areas: first, the modeling of the current electric battery behavior based on in-vehicle data, and second, the methodology for analyzing the battery state.
While conventional battery models are mainly based on physical system representations, this thesis focuses on novel data-driven methods that are able to independently learn relevant correlations from vehicle operational data and to use this information to continuously update the battery model. In a preliminary analysis, artificial neural networks with a sliding window approach proved to be a suitable candidate to learn the electric battery behavior during operation.
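To make the sliding-window idea concrete, the following minimal sketch (not the thesis implementation) builds windows of past current and temperature samples and fits a small feed-forward network to predict the next terminal voltage; the window length, the feature choice, the synthetic data, and the use of scikit-learn's MLPRegressor are illustrative assumptions.

```python
# Hypothetical sketch of a sliding-window neural battery model; all data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(current, temperature, voltage, window=32):
    """Stack the last `window` samples of current and temperature as features
    and predict the terminal voltage of the following sample."""
    X, y = [], []
    for t in range(window, len(voltage)):
        X.append(np.concatenate([current[t - window:t], temperature[t - window:t]]))
        y.append(voltage[t])
    return np.asarray(X), np.asarray(y)

# Synthetic stand-in for in-vehicle operating data (current in A, temperature in degC).
rng = np.random.default_rng(0)
n = 5000
current = rng.normal(0.0, 20.0, n)
temperature = 25.0 + rng.normal(0.0, 2.0, n)
voltage = 3.7 - 1e-3 * np.cumsum(current) / 100.0 + rng.normal(0.0, 0.005, n)

X, y = make_windows(current, temperature, voltage)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X[:4000], y[:4000])
print("held-out R^2:", model.score(X[4000:], y[4000:]))
```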
In terms of the methodology, the analysis of the battery state is considered separately at cell-level and system-level due to the high complexity of the battery behavior and the possible operating conditions inside electric vehicles. The development and evaluation of the methodology for battery state determination are carried out at cell-level by using extensive operating and testing conditions. In particular, pulse tests and incremental capacity analysis achieved high accuracies. However, the results of this work also show that the cell-level methodology cannot be directly extrapolated to system-level battery behavior due to systematic and statistical uncertainties.
Apart from the current limitations of neural battery models, various optimization potentials in the area of the training process and data preprocessing are identified. In conclusion, the obtained findings provide an outlook for further applications in the context of data-driven battery analysis.
The global prevalence of various metabolic pathologies, such as the metabolic syndrome, obesity, diabetes, or nonalcoholic fatty liver disease (NAFLD), has increased steadily in recent decades and is taking on the proportions of a worldwide epidemic. It is therefore important to understand the background and pathogenesis of these disorders in order to reverse this trend. In recent years, research in Magnetic Resonance Imaging (MRI) has focused on the extraction of quantitative biomarkers from qualitative MR image data, which allows relaxation properties or signal ratios to be estimated quantitatively in the form of parameter maps. Quantitative MRI (qMRI) techniques come with the promise of earlier disease detection and better grading and staging of diseases. Since MRI is a non-invasive and non-ionizing modality, it allows large-scale research studies and clinical assessment. Nevertheless, widespread clinical adoption of qMRI for the investigation of the described disorders is still limited. Conventional methods suffer from lengthy scan times and motion sensitivity, since a number of images have to be acquired and since Cartesian sequences are often applied. In the abdomen, this is further complicated by respiration, which is why data is typically acquired during breath-holding. Moreover, currently applied biomarkers are limited in terms of their significance for the prediction of disease states.
This work includes techniques for the MRI-based estimation of a new biomarker called the triglyceride saturation state, which potentially gives more insight into the pathogenesis of metabolic abnormalities or the risk of breast cancer development. To this end, two methods are proposed that calculate 3-D parameter maps of the relative fractions of saturated, mono-unsaturated, and poly-unsaturated fatty acids with regard to the total fat content. Both techniques efficiently sample Cartesian multi-echo gradient-echo (GRE) data using bipolar readout gradients and apply low-rank denoising before parameter estimation. Phase errors are addressed either analytically using an eddy-current phase parameter or by applying novel echo-dependent phase maps. In vitro, the developed methods yielded accurate and reproducible results across different protocols. In vivo, applicability was assessed using measurements in healthy volunteers and patients at the clinical sites of the abdomen and the breast.
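The low-rank denoising step can be illustrated with a small sketch: the multi-echo signals are arranged in a Casorati matrix (echoes x voxels) and truncated to a small rank via an SVD before parameter estimation. The rank, the data shapes, and the synthetic toy data below are assumptions and not the protocol used in the thesis.

```python
# Hedged sketch of low-rank (truncated-SVD) denoising of multi-echo data.
import numpy as np

def lowrank_denoise(echoes, rank=4):
    """echoes: complex array of shape (n_echoes, nx, ny). Returns the rank-truncated
    reconstruction of the Casorati matrix (echoes x voxels)."""
    n_echoes = echoes.shape[0]
    casorati = echoes.reshape(n_echoes, -1)
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s[rank:] = 0.0                      # keep only the leading singular components
    denoised = (u * s) @ vh
    return denoised.reshape(echoes.shape)

# Toy example: 6 echoes of a 64x64 slice with additive complex Gaussian noise.
rng = np.random.default_rng(1)
clean = np.exp(-0.05 * np.arange(6))[:, None, None] * np.ones((6, 64, 64))
noise = 0.05 * (rng.standard_normal((6, 64, 64)) + 1j * rng.standard_normal((6, 64, 64)))
denoised = lowrank_denoise(clean + noise, rank=2)
print("residual error std:", np.abs(denoised - clean).std())
```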
Moreover, two methods are proposed that allow proton-density fat fraction (PDFF) and R2* quantification during free-breathing. A motion-robust radial stack-of-stars sequence is applied and iteratively reconstructed using a model-based technique that combines Parallel Imaging (PI) and Compressed Sensing (CS). The developed approaches are capable of estimating self-gated respiratory motion-averaged and motion-resolved parameter maps from undersampled as well as fully sampled data. Compared with a conventional state-of-the-art technique, the motion-averaged reconstruction achieved accurate PDFF values, and the motion-resolved reconstruction accurate PDFF and R2* results, in n = 14 patients enrolled in a hepatobiliary research protocol. For patients unable to suspend respiration, the developed methods are promising alternatives to conventional techniques, which typically apply motion-sensitive Cartesian sequences and sample data during breath-holding.
In summary, in this work relevant contributions are made to the fields of fat quantification, fatty acid composition calculation and R2* estimation. The proposed techniques address currently existing limitations in the field of qMRI for imaging various disorders in the abdomen.
Query processing is a traditional yet still active field of research. Its significance derives from the increase in data created and processed every day and from the opportunities provided by analyzing that data. In today’s world, complete businesses are built on top of sophisticated data processing capabilities. However, with the increase of data, processing these huge amounts of data becomes more and more challenging, not only because of the time and resources it takes but also because of the energy costs. Consequently, researchers have broadened the range of processing architectures investigated for query processing beyond traditional processor-based systems. Next to programmable graphics processing units (GPUs), field-programmable gate arrays (FPGAs) have become of great interest due to their unique features. FPGAs not only allow the construction of highly optimized hardware circuits for specific tasks but also enable the adaptation of the hardware to the task at runtime. Hence, many researchers have presented proposals to exploit the features provided by FPGAs. Although the proposed systems can in general achieve high throughput and efficiency, they are often not able to accelerate queries that have not been considered during their design. Performance and efficiency are gained best through specialization, and thus a system should adapt to an incoming, unknown query. This is possible with FPGAs due to their ability to be reconfigured fully or in parts at runtime. However, this comes at the cost of high startup times, as the FPGA has to be configured according to the query prior to its execution. Furthermore, it is almost impossible to generate hardware configurations for every possible query.
This thesis introduces an innovative FPGA-based near-data processing system able to process a wide variety of queries at I/O rate (line rate). It is based on reconfigurable and parametrizable accelerators. The accelerators are composed of parametrizable modules from a library. These modules do not merely implement a specific operator for a specific type but are optimized to implement operators for multiple types or even multiple functions without a drastic increase in resources. Another contribution of this thesis is the concept of optimistic query processing for demanding operators such as the join and the regular expression matching operator. It is based on the idea of approximately filtering as much data as possible in hardware without removing tuples that should be kept. The resulting, often reduced, intermediate table is guaranteed to be a superset of the result of the exact filter operation. A software-based operator implementation can then be applied to the intermediate table with fewer tuples to finalize the operation. As an example, the implementation of a module for regular expression matching is presented.
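The idea of optimistic filtering can be sketched in a few lines: a cheap, conservative prefilter (standing in for the FPGA module) discards only tuples that provably cannot match, so the intermediate table is guaranteed to be a superset of the exact result, which an exact software operator then finalizes. The pattern, the required-substring check, and the example rows are assumptions for illustration.

```python
# Illustrative sketch of optimistic query processing for regular expression matching.
import re

PATTERN = re.compile(r"err(or|!)[0-9]{3}")
REQUIRED_SUBSTRING = "err"   # any tuple matching PATTERN must contain this literal

def optimistic_prefilter(tuples):
    """Conservative filter: never drops a true match, may keep false positives."""
    return [t for t in tuples if REQUIRED_SUBSTRING in t]

def finalize(tuples):
    """Exact (software) regular-expression matching on the reduced intermediate table."""
    return [t for t in tuples if PATTERN.search(t)]

rows = ["error404 found", "warning only", "err!500 raised", "no errors here", "ok"]
candidates = optimistic_prefilter(rows)      # superset of the exact result
result = finalize(candidates)
print(candidates)   # ['error404 found', 'err!500 raised', 'no errors here']
print(result)       # ['error404 found', 'err!500 raised']
```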
Equipped with a parameter sequencer, accelerators assembled from this library are able to implement a greater variety of queries by setting the parameters of the modules according to the query to be processed. However, the schema in which the tables are stored also influences the design of the accelerator and may therefore limit the types of queries it can implement. For this, a hardware unit called ReOrder is introduced. It decouples the table schema and storage layout from the accelerator, enabling all accelerators to be used on every table with row-oriented and column-oriented storage layouts. Even though the developed accelerators are able to implement a wide variety of queries, no one-size-fits-all accelerator is possible. Consequently, the system is designed to concatenate multiple partially reconfigurable (i.e., exchangeable) accelerators without a decrease in tuple throughput. This further increases the range of queries that can be processed. As accelerators might not use all available resources within a partially reconfigurable FPGA region, the idea of in situ statistics generation is proposed. In situ statistics modules can utilize the free resources to gather information on the table processed by an accelerator without additional cost in terms of time.
Complementary to the hardware-related parts already mentioned, a control software managing the execution of a query on such a system is presented as well. Starting from the basic components needed to execute queries on the platform, the description goes into depth on the particularities of such an FPGA-based query execution system. In particular, the query placement problem, i.e., the problem of finding a query-specific configuration of the system's hardware for an incoming query, is formulated. In addition, the challenges of obtaining an optimal placement are discussed and exemplified using the problem of buffer assignment. Afterwards, the parameters of the modules have to be generated. In this regard, an algorithm to obtain the parameters for a ReOrder unit is presented and evaluated in depth. Additionally, considerations about parameter generators for a histogram module and the optimistic regular expression matching module are provided.
Finally, an implemented prototype of the system, called the ReProVide unit, has been evaluated. It is able to provide I/O-rate processing of simple as well as complex queries. Compared to a software-based in-memory database system executed on an ARM processor, queries could be executed 19.9× faster on the prototype on average. When executed on an x86 processor, comparable execution times were observed. This means the prototype system, storing the tables on two solid-state drives, was able to process queries as fast as an x86 system holding the tables in memory. Furthermore, the prototype is shown to be very energy-efficient, consuming on average less than 25% of the energy consumed by the x86 system.
Minimally invasive procedures leverage X-rays for online diagnostics and planning, device navigation, and confirmation of successful patient treatment. Although these interventions yield better outcomes than open surgeries at a lower risk, X-rays introduce considerable health concerns. Prolonged irradiation of the same skin area results in skin rashes, hair loss, and even ulceration. Besides, every exposure to X-rays entails the stochastic risk of developing some form of cancer, making radiation dose management mandatory. Radiation protection builds on two fundamental pillars: dose monitoring and dose avoidance. Unfortunately, current dose tracking systems used in a clinical environment simplify the actual physics, and particularly scattered radiation, to meet real-time requirements in the interventional suite. A common countermeasure against scatter is the use of an anti-scatter grid. While it reduces X-ray scatter in the image, it can increase the overall dose. To improve the situation, efficient methods to quantify scatter are desirable, since they yield a better dose estimate. They can also facilitate removal of the anti-scatter grid and hence reduce X-ray dose. This thesis investigates approaches to integrate prior knowledge into neural networks to speed up dose and scatter estimation while maintaining physical plausibility.
We integrated a Monte Carlo simulation toolkit with digital twins of interventional X-ray systems and patients. A novel filtering-based technique allows for a quick comparison of computational and experimental dose studies. Using this framework, we cross-validated simulations with empirical measurements. We found considerable deviations induced by tissue-equivalent plastics and improper phantom registration.
As part of this framework, we developed a novel method comprising a comprehensive patient organ model, a fast primary X-ray fluence simulation, and a convolutional neural network to refine this simulation. Patient-specific absorption and density maps guide the network in mapping the primary fluence to primary and scatter dose distributions. Our method generalized well to unseen patients and anatomic regions in a computational study on skin dose and only slightly lost accuracy compared to the Monte Carlo simulation. However, we achieved these results in a fraction of the time, less than a second. The results also encourage investigating organ dose estimation.
For the case when only a patient-shape model is available, we extended the conventional skin dose formalism by a learning-based backscatter estimator. It infers a latent scatter representation from X-ray images, which encode the patient's anatomy. In order to improve the physical plausibility, both X-ray projection scatter and backscatter are calculated simultaneously. Our experiments revealed that both scatter distributions were accurately estimated. Also, a comparative study showed the superiority of the multi-task approach over the conventional and single-task methods. The accuracy is comparable to the previous approach at a much higher computational efficiency.
We also studied how to enforce low-frequency properties during learning-based X-ray projection scatter estimation to rule out corruption of relevant high-frequency information. As part of this work, we extended a shallow convolutional encoder by multivariate B-spline evaluation. This way, we decreased the parameter and runtime complexity by orders of magnitude while being on par with a state-of-the-art neural network. Moreover, we demonstrated data integrity and robustness to unseen noise. A phantom study involving cone-beam computed tomography images yielded encouraging results, hinting at a potential clinical application of the proposed method.
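The core of that design can be illustrated as follows: a coarse grid of values, standing in for the output of the shallow convolutional encoder, is expanded to detector resolution by bivariate cubic B-spline evaluation, so the resulting scatter estimate is smooth by construction and cannot corrupt high-frequency image content. The grid sizes and values are illustrative assumptions, and SciPy's RectBivariateSpline stands in for the multivariate B-spline evaluation used in the thesis.

```python
# Hedged sketch: low-frequency scatter estimate from coarse control points via B-splines.
import numpy as np
from scipy.interpolate import RectBivariateSpline

coarse_h, coarse_w = 8, 8            # control-point grid (assumed size)
detector_h, detector_w = 256, 256    # detector resolution (assumed size)

# Stand-in for learned control-point amplitudes of the scatter distribution.
rng = np.random.default_rng(0)
control_points = 100.0 + 10.0 * rng.random((coarse_h, coarse_w))

yc = np.linspace(0, 1, coarse_h)
xc = np.linspace(0, 1, coarse_w)
spline = RectBivariateSpline(yc, xc, control_points, kx=3, ky=3)  # cubic B-splines

yf = np.linspace(0, 1, detector_h)
xf = np.linspace(0, 1, detector_w)
scatter_estimate = spline(yf, xf)     # smooth, low-frequency scatter map
print(scatter_estimate.shape)         # (256, 256)
```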
In high‐performance computing, block‐structured grids are favored due to their geometric adaptability while supporting computational performance optimizations connected with structured grid discretizations. However, many problems on geometrically complex domains are traditionally solved using fully unstructured meshes. We address this deficiency in the two‐dimensional case by presenting a method which generates block‐structured grids (BSGs) with a prescribed number of blocks from an arbitrary triangular grid.
Each block in a BSG contains a structured grid. Vertex repositioning is one of the key operations for adapting a structured grid to a domain. We present an algorithm called discrete mesh optimization (DMO), a greedy approach to topology-consistent mesh quality improvement. The method optimizes vertex positions according to a user-defined quality metric. It is easily adaptable to any mesh and metric, as it does not rely on differentiable functions. We give examples for triangle, quadrilateral, and tetrahedral meshes and for various metrics, and we show that DMO outperforms other state-of-the-art methods in terms of convergence and runtime.
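The greedy, quality-driven nature of such vertex repositioning can be sketched as follows; the candidate-grid search, the search radius, and the mean-ratio-like quality metric are assumptions for illustration and not the exact DMO formulation.

```python
# Illustrative sketch of greedy, quality-driven vertex repositioning in 2-D.
import numpy as np

def triangle_quality(a, b, c):
    """Mean-ratio style quality: 1 for an equilateral triangle, towards 0 for degenerate ones."""
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    edge_sq = np.dot(b - a, b - a) + np.dot(c - b, c - b) + np.dot(a - c, a - c)
    return 4.0 * np.sqrt(3.0) * area / edge_sq if edge_sq > 0 else 0.0

def dmo_step(vertices, triangles, vertex_id, radius=0.2, grid=5):
    """Greedy update of one free vertex: try grid x grid candidate positions and keep
    the one that maximizes the minimum quality of the adjacent triangles."""
    adjacent = [t for t in triangles if vertex_id in t]

    def local_quality(p):
        vs = vertices.copy()
        vs[vertex_id] = p
        return min(triangle_quality(*(vs[i] for i in t)) for t in adjacent)

    best_p, best_q = vertices[vertex_id].copy(), local_quality(vertices[vertex_id])
    for dx in np.linspace(-radius, radius, grid):
        for dy in np.linspace(-radius, radius, grid):
            p = vertices[vertex_id] + np.array([dx, dy])
            q = local_quality(p)
            if q > best_q:
                best_p, best_q = p, q
    vertices[vertex_id] = best_p
    return best_q

# Tiny example: one interior vertex (index 4) surrounded by four fixed boundary vertices.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.7, 0.2]])
triangles = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
print("min adjacent quality after one sweep:", dmo_step(vertices, triangles, vertex_id=4))
```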
We generate BSGs consisting of quadrilateral blocks which contain structured triangle grids. This design combines the advantages of triangular and quadrilateral meshes. It requires a robust method for generating a prescribed number of quadrilateral blocks. We compare different methods for indirect quadrilateral mesh generation and show that a combination of those methods delivers good performance and high-quality results. We demonstrate the efficiency of our hybrid approach on ocean meshes with up to several million triangles.
Our research on BSG generation has shown that ocean domains are often too complex to be represented accurately by BSGs. We extend our method by masking elements, which enables the accurate representation of fractal boundary shapes, as we can resolve the geometry at high granularity. The performance of the BSG generation is evaluated on grids constructed for regional ocean problems utilizing the two-dimensional shallow water equations.
Weight-bearing C-arm cone-beam computed tomography (CBCT) of the knees is an imaging technique that can be applied to acquire information about the structures of the knee joint under natural loading conditions. An analysis of the knee joints under load can provide valuable information about the disease progression of patients suffering from osteoarthritis (OA). OA is a degenerative disease that causes breakdown of soft tissues like articular cartilage leading to severe pain especially during movement. Weight-bearing CBCT is beneficial compared with standing 2-D radiographs traditionally applied for OA diagnosis, as it provides 3-D information about the structures of the knees. However, this acquisition technique also poses some challenges which are addressed in this thesis, including detector saturation, motion artifacts owing to postural sway during the scan, and the analysis of the resulting reconstructions.
During weight-bearing CBCT acquisition, the C-arm rotates on a horizontal trajectory around the knees. The high X-ray dose required in lateral views to penetrate both femurs leads to detector saturation in less dense tissue regions. To address this issue, an approach for preventing detector saturation in the analog domain is presented. It non-linearly transforms the intensities using an analog tone mapping operator (TMO) before digitization in order to increase the dynamic range of the detector. Furthermore, a second approach is described that replaces saturated regions in the projection images with information obtained from an additional low-dose scan. A marker-based 3-D non-rigid alignment step makes the approach robust to subject motion between scans. Both approaches lead to improved image quality in simulations, and a clinical evaluation confirms the feasibility of the second approach.
The swaying motion of naturally standing subjects during weight-bearing CBCT acquisition leads to blurring and streaking artifacts in the reconstructions. To correct these artifacts, an inertial measurement unit (IMU)-based rigid and non-rigid motion compensation is developed and evaluated in a simulation study using the recorded motion of real standing subjects. The approaches lead to improved image quality on optimal simulations. A noisy-signal simulation reveals the limitations of the approach with regard to an application in real acquisitions. A subsequent phantom study shows that additional motion is induced by the vibration of the C-arm during the scan, which cannot be measured by the IMU attached to the legs of the scanned subject.
Afterwards, the thickness analysis of tibial cartilage is addressed. First, an analysis of manual segmentations of the tibial cartilage surface by multiple raters is performed, showing that low-pass filtered single-rater segmentations are more similar to the consensus of multiple raters than the original segmentations. Furthermore, as a fast and repeatable alternative to manual segmentation, an automatic convolutional neural network (CNN)-based approach for cartilage surface segmentation is developed. As there is no standard measure for cartilage thickness in the literature, the results of four cartilage thickness measures are compared, revealing their similarities and differences. A subsequent evaluation of the change in cartilage thickness over time supports the expectation that lateral cartilage thickness decreases under load.
This thesis provides valuable tools for the pipeline aimed at analyzing cartilage in OA. The presented methods contribute to the improvement of data acquisition and processing in weight-bearing CBCT and pave the way for the evaluation of clinical data through a detailed and thorough analysis of all described processing steps.
Location-based entertainment has meanwhile become a basic need. The required accuracy and reliability of localization systems is growing not only for intelligent systems such as self-driving vehicles, delivery drones, and mobile devices, but also for everyday pedestrians. Owing to ubiquitous sensors such as cameras, GPS, and inertial sensors, a large number of localization systems are being developed using elaborately handcrafted models and algorithms. To avoid the line-of-sight restrictions and varying lighting conditions of camera systems, radio and inertial sensors are typically used for localization. Under ideal laboratory conditions, these sensors and models can estimate positions and orientations accurately over the long term. In real environments, however, many problems such as inaccurate system modeling, incomplete sensor measurements, noise, and complex environmental dynamics affect accuracy and reliability. Considered individually, radio and inertial sensors face difficulties: radio localization is very inaccurate due to multipath propagation caused by static or dynamic objects along the propagation paths between transmitter and receiver. In contrast, inertial sensors accumulate distance and orientation errors over time and cannot establish an absolute reference to the world map. State-of-the-art methods combine both sensors to exploit complementary effects, but cannot resolve these difficulties. Moreover, with simple motion models such as constant acceleration or constant velocity, they cannot describe highly nonlinear human motion.
The main goal of this work is therefore to investigate the impact of data-driven methods and different sensor data streams from loosely placed sensors on the accuracy of human pose estimation in highly dynamic situations. The absolute accuracy of the obtained results is compared with state-of-the-art filtering methods. To address the problems of human-designed localization models, this work employs machine learning and deep learning methods. Learning methods for position, velocity, and orientation estimation as well as for trajectory reconstruction using multimodal measurements from radio and inertial sensors are presented to achieve accurate and robust localization. The impact of data-driven methods along a typical processing chain for localization with radio and inertial sensors is investigated. The processing chain is loosely coupled and divided into atomic components so that each data-driven method can easily be exchanged. Sequence-based learning methods are used along the processing chain to estimate absolute positions from arrival timestamps of radio signals with multipath propagation, to estimate undirected velocity vectors from inertial sensors, to classify motion patterns that calibrate the orientation of the trajectory, and finally to fuse the individual components into a trajectory. The proposed methods learn to handle different motion behaviors and enable robust and precise localization. In large-scale studies, measurement and reference data are recorded for different types of motion at different speeds. Extensive experiments demonstrate the effectiveness and potential of the proposed methods. The data-driven, modular processing chain yields more accurate and more robust estimates than known methods, even for dynamic motion with noisy inertial sensors and radio environments with multipath propagation.
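One element of such a processing chain, the sequence-based estimation of velocity vectors from inertial data and their fusion into a trajectory, can be sketched as follows; the network size, window length, sampling interval, and synthetic inputs are assumptions, and the model is untrained, so this illustrates the data flow rather than the method developed in the thesis.

```python
# Hedged sketch: sequence model mapping IMU windows to 2-D velocities, integrated to a path.
import torch
import torch.nn as nn

class VelocityLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # vx, vy per window

    def forward(self, imu_window):                # (batch, time, 6): accelerometer + gyroscope
        out, _ = self.lstm(imu_window)
        return self.head(out[:, -1])              # velocity estimate at the last time step

model = VelocityLSTM()
imu = torch.randn(32, 100, 6)                     # 32 windows of 100 IMU samples (synthetic)
velocities = model(imu)                           # (32, 2), untrained example output
trajectory = torch.cumsum(velocities, dim=0) * 0.01   # integrate with an assumed dt = 0.01 s
print(trajectory.shape)
```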
Cyber-physical systems (CPS) security, as a prevalent concern in all digital industries, must be implemented on different levels of abstraction. For example, the development of top-down approaches, e.g., security models and software architectures, is equivalent in importance to the development of bottom-up solutions like the design of new protocols and languages. This thesis combines research in the field of CPS security from both approaches and contributes to the security models of the two lighthouse examples of automotive software engineering and general password security.
Most existing countermeasures against cyberattacks, e.g., the use of message cryptography, concentrate on concrete attacks and do not consider the complexity of the various access options offered by modern cyber-physical systems. This is mainly due to a solution-oriented approach to security problems. The model-based technique SAM (Security Abstraction Model) adds to the early phases of (automotive) software architecture development by explicitly documenting attacks and managing them with the appropriate security countermeasures. It additionally establishes the basis for comprehensive security analysis techniques, e.g., already available attack assessment methods. SAM thus contributes to an early, problem-oriented and solution-ignorant understanding that combines key stakeholder knowledge. This thesis provides a detailed overview of SAM, and the analyses resulting from our evaluation show that SAM puts the security-by-design principle into practice by enabling collaboration between automotive system engineers, system architects, and security experts. The application of SAM aims to reduce costs, improve overall quality, and gain competitive advantages. Based on our evaluation results, SAM is sufficiently suitable, comprehensible, and complete to be used in industry.
The bottom-up approach focuses on the area of password hardening encryption (PHE) services as introduced by Lai et al. at USENIX 2018. PHE is a password-based key derivation protocol that involves an oblivious external crypto service for key derivation. The security of PHE protects against offline brute-force attacks, even when the attacker has full access to the data server. The obvious evolution of PHE is the extension of the protocol to use multiple rate-limiters (guardians) to mitigate the single point of failure introduced by the original scheme.
In the second part of this thesis, a general overview of the motivation and use cases of PHE is given, along with a new formalization of the protocol to address the mentioned scalability and availability issues. Moreover, an implementation of the resulting threshold-based protocol is briefly explained and evaluated. Our implementation is furthermore tested and evaluated in a novel use case featuring password-hardened encrypted email.
Digital images and videos have taken an outstanding role in many areas of everyday life, e.g., for documentation and communication of events. However, the availability of sophisticated software applications makes it straightforward to realistically manipulate digital footage. This can entail detrimental consequences. The goal of multimedia forensics is to provide as much information as possible on the origin, history, and authenticity of multimedia samples. Over the past two decades, numerous successful algorithms have been proposed to address this goal. One of the major contemporary challenges of multimedia forensics is to maintain algorithm performance under strong lossy compression. Lossy compression sacrifices signal fidelity for reduced bit rates and is particularly widespread in online and mobile applications.
In this dissertation, we present several contributions for the robust forensic analysis of strongly compressed images. First, we propose a taxonomy of existing multimedia forensics algorithms that categorizes approaches based on their relation to compression. We identify three major groups: the family of statistics-based algorithms that are impeded by compression, the family of compression-based algorithms that are symbiotic to compression, and the family of physics-based approaches that are largely insensitive to compression. Based on this categorization, we identify common strengths and major limitations, as well as potential remedies. Second, we make several algorithmic contributions in the group of physics-based and statistics-based methods for robust and flexible forensic image analysis.
Arguably, the main limitations of physics-based approaches are their restrictive assumptions on scene composition and their frequent need for manual annotations. Different from other color-based works that solely model the illuminant color, we propose a more descriptive forensic cue that jointly models the influence of in-camera processing and illuminant conditions in a supervised fashion. We further propose a metric-learning-based extension of the color descriptor that requires much weaker supervision and is thereby amenable to significantly larger training datasets, enhancing performance. We show that our proposed descriptor is very robust against compression, and that it outperforms state-of-the-art splicing detectors on low-quality images, without being restricted to particular scene compositions and without requiring user input.
One of the main limitations of statistics-based approaches is that they typically deteriorate strongly in the presence of compression. To assess their real-world applicability to compressed images, it is critical to evaluate algorithms on rigorous, realistic test cases. We argue that this is infeasible for camera identification using existing databases, and propose a novel database to close this gap. Using this database, we investigate the robustness of learning-based camera identification. We present an approach that significantly outperforms the state of the art in camera identification, both on clean and strongly compressed images. We further find that using compression as data augmentation can significantly improve performance on compressed images, even for completely unknown compression algorithms.
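Compression augmentation of this kind can be sketched in a few lines: each training image is round-tripped through JPEG at a random quality factor so the classifier also sees strongly compressed variants. The quality range and the random patch below are assumptions for illustration.

```python
# Minimal sketch of JPEG compression as a data augmentation step.
import io
import random
import numpy as np
from PIL import Image

def jpeg_augment(image_array, quality_range=(20, 95)):
    """Round-trip an RGB uint8 array through JPEG at a random quality factor."""
    quality = random.randint(*quality_range)
    buffer = io.BytesIO()
    Image.fromarray(image_array).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.asarray(Image.open(buffer)), quality

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # synthetic training patch
augmented, q = jpeg_augment(patch)
print(augmented.shape, "re-encoded at quality", q)
```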
This thesis presents a model-based testing approach for cooperating autonomous systems based on colored Petri nets. To this end, various modeling languages that appear potentially suitable for coping with the high complexity of such systems are first compiled. Based on an established catalog of criteria, the approaches are comparatively evaluated with regard to their applicability. Building on the choice of colored Petri nets for pursuing the approach further, coverage criteria are introduced as objectively measurable test goals and related to each other. The generation of test suites constitutes a multi-objective optimization problem: while the structural coverage with respect to a selected criterion is to be maximized, the number of test cases should be kept as low as possible for economic reasons. To solve this conflicting optimization problem, both analytical and heuristic methods are presented. The heuristic approach is pursued in the form of genetic algorithms, and the operators required for this are described in detail with respect to the chosen modeling language of colored Petri nets. Particular importance is attached to a multi-objective optimization by means of Pareto-optimal solutions, which allows the user to choose between optimal solution alternatives. The approaches and tools developed in the course of this work are finally tested on a fictitious application inspired by the project "Resilient Reasoning Robotic Co-operating Systems" (R3-COP), funded by the European Union and the German Federal Ministry of Education and Research (BMBF). In addition, the quality of the generated test suites is evaluated by means of a model-based mutation test.
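The Pareto selection between the two competing objectives, maximizing structural coverage while minimizing test-suite size, can be illustrated with a small sketch; the candidate suites and their scores are invented for the example.

```python
# Illustrative Pareto-front filter for (coverage, suite size) trade-offs.
def pareto_front(candidates):
    """candidates: list of (name, coverage, size). Returns the non-dominated entries."""
    front = []
    for name, cov, size in candidates:
        dominated = any(c2 >= cov and s2 <= size and (c2 > cov or s2 < size)
                        for _, c2, s2 in candidates)
        if not dominated:
            front.append((name, cov, size))
    return front

suites = [("A", 0.95, 40), ("B", 0.90, 25), ("C", 0.80, 30), ("D", 0.95, 35)]
print(pareto_front(suites))   # A and C are dominated; B and D remain as alternatives
```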
With today’s technology, various non-invasive imaging methods provide detailed insight into the patient’s anatomy and support the physician during surgery. Depending on the signal type of each modality, the information collected is limited to narrow parts of the overall available information. A combination of several modalities makes it possible to increase the amount of information simultaneously made available to the physician. Especially a possible combination of the two most frequently used modalities, namely magnetic resonance imaging (MRI) and X-ray/computed tomography (CT), is of particular interest. Apart from the physical challenges, such a combination also poses a challenge to the acquisition scheme for a meaningful simultaneous acquisition with both modalities. This is especially true for the interventional environment, which implies additional requirements such as the real-time capability of the acquisition scheme. Such a combination comes along with limitations for the signal recording schemes, up to the degree that an analytically correct solution can no longer be derived. Recently, there have been different attempts to overcome such limitations with machine learning. However, the solutions found are only partially comprehensible, and signal authenticity cannot be guaranteed. With the integration of prior knowledge about signal acquisition processes, such models can also be used in the medical field. In this thesis, we outline an acquisition scheme for a novel hybrid magnetic resonance (MR)/X-ray imaging system for the interventional environment and investigate the realization of parts of the scheme. Further, we investigate the concept of known operator learning, derive an implementation for operators in CT, and utilize the concept to enable the aforementioned hybrid MR/X-ray acquisition scheme.
First, we investigate the possibilities and limitations of incorporating prior knowledge about the signal processing chain into machine learning algorithms. Using the universal approximation theorem (UAT), the benefits of mixing prior knowledge with neural networks are analyzed theoretically. Such a learning pipeline enriched with known operators allows the number of trainable parameters to be reduced and constrains the pipeline such that signal authenticity can be met, while the approximative power of deep learning (DL) can still be utilized. In the course of the thesis, the concept is implemented for CT reconstruction and evaluated from a methodological and algorithmic point of view. The results show that CT operators can be integrated into neural networks, allowing gradient flow through the whole pipeline. The experiments suggest that such mixed pipelines can be trained with only numerical data if designed specifically for the problem. Furthermore, an open-source software framework called PYRO-NN is developed to make these benefits accessible to a broader community.
Second, we develop a novel hybrid MR/X-ray acquisition scheme for image-guided interventions. The scheme acknowledges the high contrast diversity of MRI and the benefits of fast, high-resolution X-ray imaging. The information provided by both modalities is captured in different domains, which makes a subsequent exploitation of the complementary signals difficult and is identified as a major obstacle to fully benefiting from the simultaneous hybrid acquisition. The proposed scheme results in multiple orthographic MR projections and one perspectively distorted X-ray projection per frame. In the course of the thesis, we first develop a novel tomographic conversion scheme to overcome the limitations of classical geometric rebinning. The tomographic rebinning is derived and performed utilizing the known operator learning scheme to learn an efficient convolution-based algorithm for the conversion. The novel concept is first designed and comprehensively evaluated on the 1-D/2-D case against the baseline method. Subsequently, the tomographic rebinning concept is developed for the clinically relevant 2-D/3-D case. In the course of the experiments, we show that the efficient convolution-based algorithm can be learned with purely numeric training data. The presented method overcomes the limitations of the baseline method and provides sharper and more distinct projections. Hence, the learned tomographic rebinning algorithm is promising for the proposed hybrid MR/X-ray acquisition scheme, and the underlying known operator learning concept encourages further integration of the acquisition and contrast parameters into the trainable pipeline. In general, the benefit of incorporating domain knowledge into neural networks is highly promising and extends to various other tasks, even beyond medical imaging.
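The general principle of known operator learning, independent of the PYRO-NN API, can be illustrated with a generic sketch: a fixed, non-trainable operator A, standing in for a known operator such as a projection or filtering step, is embedded in a network so that gradients flow through it while only the surrounding layers are trained. The operator, layer sizes, and data below are toy assumptions.

```python
# Generic, hedged illustration of embedding a known (fixed) operator in a trainable pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnownOperatorNet(nn.Module):
    def __init__(self, A):
        super().__init__()
        self.register_buffer("A", A)                   # known operator: fixed, not trained
        self.pre = nn.Linear(A.shape[1], A.shape[1])   # trainable correction before A
        self.post = nn.Linear(A.shape[0], A.shape[0])  # trainable refinement after A

    def forward(self, x):
        x = torch.relu(self.pre(x))
        x = x @ self.A.T                               # apply the known operator
        return self.post(x)

A = torch.randn(32, 64)                                # toy stand-in for a known linear operator
net = KnownOperatorNet(A)
x = torch.randn(8, 64)
target = x @ A.T                                       # purely numeric training data
loss = F.mse_loss(net(x), target)
loss.backward()                                        # gradients reach pre/post, A stays fixed
print(sum(p.numel() for p in net.parameters() if p.requires_grad), "trainable parameters")
```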
Life Cycle Assessment (LCA) deals with factors that impact the environment, such as the greenhouse gas potential. It considers the complete life cycle of a product or service. In the first case, this means the analysis of raw material extraction, through production and usage, to end of life. The ISO standards 14040 and 14044 recommend a precisely defined procedure: Goal and Scope Definition, Life Cycle Inventory Analysis, Life Cycle Impact Assessment, and Interpretation.
So far, this instrument has mostly been applied only as a balancing method in order to determine the ecological footprint of an existing product. In our further development of this approach, we use techniques of mathematical optimization to influence the design of the life cycle. This allows the environmental properties to be improved both for products under development and for existing products, while at the same time including social and financial considerations.
For Vitesco Technologies, the resulting optimized Integrated Life Cycle Sustainability Assessment (ILCSA) is an important aspect of sustainability that goes beyond the scope of LCA in the ordinary sense. We implemented a highly customizable mixed-integer optimization model which provides the product manager with guidance for choosing, for example, the locations for obtaining the raw materials, the production sites, and the material composition.
For its solution, we developed a software tool with a user-friendly interface and show the benefit in a detailed case study using the example of a lithium-ion battery. The battery is a main component of an electric vehicle and is therefore of great significance for the current focus topic of electromobility. Our method can be transferred to all other automotive components, up to the optimization of the life cycle of the complete vehicle.
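A deliberately tiny, hypothetical instance of such a mixed-integer model can look as follows: one production site and one raw-material source are chosen so that the greenhouse-gas potential is minimized while a cost budget is respected. All sets, numbers, and constraints are invented for the sketch and do not reflect Vitesco Technologies data or the full ILCSA model.

```python
# Toy mixed-integer optimization sketch (PuLP) for a life-cycle design choice.
import pulp

sites = {"site_A": {"co2": 120, "cost": 80}, "site_B": {"co2": 90, "cost": 110}}
materials = {"mat_X": {"co2": 60, "cost": 40}, "mat_Y": {"co2": 45, "cost": 70}}
budget = 170

prob = pulp.LpProblem("ilcsa_sketch", pulp.LpMinimize)
pick_site = pulp.LpVariable.dicts("site", sites, cat="Binary")
pick_mat = pulp.LpVariable.dicts("material", materials, cat="Binary")

# Objective: total greenhouse-gas potential of the chosen life-cycle design.
prob += pulp.lpSum(sites[s]["co2"] * pick_site[s] for s in sites) + \
        pulp.lpSum(materials[m]["co2"] * pick_mat[m] for m in materials)

prob += pulp.lpSum(pick_site[s] for s in sites) == 1          # exactly one production site
prob += pulp.lpSum(pick_mat[m] for m in materials) == 1       # exactly one material source
prob += pulp.lpSum(sites[s]["cost"] * pick_site[s] for s in sites) + \
        pulp.lpSum(materials[m]["cost"] * pick_mat[m] for m in materials) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [v.name for v in prob.variables() if v.value() == 1]
print(pulp.LpStatus[prob.status], chosen, "CO2 =", pulp.value(prob.objective))
```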
Our understanding of Parkinson’s disease (PD), its symptoms, and its diagnosis has expanded considerably since its formal description by James Parkinson two centuries ago [Prze17]. However, this common neurodegenerative disorder still threatens the health and well-being of patients and remains an economic burden, since a complete treatment remains a formidable challenge. Impaired gait is one of the most characteristic symptoms of Parkinson’s disease. The assessment of movement impairments forms a basis for diagnosis, for evaluating disease progression, and for evaluating therapeutic interventions.
The emergence of wearable technologies has permitted the development of mobile systems for gait analysis. This technology enables us to record large amounts of patient data not only during clinical visits but also outside the clinic. Data-driven methods hold the potential to analyze this large volume of data to provide an objective disease assessment, improve current approaches to managing disease progression, and monitor patients outside the clinic. This thesis aims to leverage data-driven methods for the development of an objective gait assessment using mobile gait analysis systems.
The present thesis answers three main open questions in this domain: the development and comparison of four widely used data-driven methods for gait segmentation, the analysis of turning with on-shoe wearable sensors, and the interpretable classification of motor impairments.
Regarding the segmentation of gait sequences into individual strides, three existing segmentation methods are implemented and validated for the PD population. For this application, a novel segmentation method is also introduced and implemented for the first time. These methods are evaluated on two data sets with different levels of data heterogeneity. This contribution presents a fair comparison of segmentation methods on an identical data set. Segmenting gait sequences is the first step for the subsequent research topics: turning analysis and the assessment of motor impairments in PD.
Further, turning deficits are examined using an on-shoe mobile gait analysis system. A method is introduced for isolating turning from the whole gait sequence based on the statistics of turning angles between two consecutive strides. The correlation of turn-derived spatio-temporal features with two widely used clinical scales is examined. This is a proof of concept for the feasibility of using on-shoe mobile gait analysis systems for turning analysis in PD. Turn-derived spatio-temporal features are then used in the next contribution.
Finally, spatio-temporal features computed from straight walking as well as turning are used for the classification of different levels of motor impairment. Gaussian processes, a probabilistic machine learning method, are introduced for the first time for this application. The method provides the classification output as well as an explicit uncertainty measure, which captures the confidence of the method in the estimated output. This measure of uncertainty is particularly important when the data set is small and noisy. A discussion of the properties of this type of data-driven method and its evaluation is presented.
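A minimal sketch of a Gaussian-process classifier with an explicit uncertainty measure is given below; the synthetic features stand in for stride- and turn-derived spatio-temporal parameters, the labels for impairment levels, and the entropy-based uncertainty is one possible choice, so the sketch illustrates the principle rather than the evaluation in the thesis.

```python
# Hedged sketch: Gaussian process classification with a per-sample uncertainty measure.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (30, 4)), rng.normal(1.5, 1.0, (30, 4))])
y = np.array([0] * 30 + [1] * 30)            # e.g. mild vs. moderate impairment (synthetic)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gpc.fit(X, y)

probs = gpc.predict_proba(X[:5])
uncertainty = -(probs * np.log(probs)).sum(axis=1)   # predictive entropy per subject
print(np.round(probs, 2))
print("uncertainty:", np.round(uncertainty, 2))
```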
To conclude, the present thesis centers on the development of data driven methods for objective assessment of gait in Parkinson’s disease. The works mentioned above contribute to the early diagnosis, evaluation of disease progression, assessment of therapeutic interventions and insights for long-term monitoring of patients outside clinics. Understanding the potentials and pitfalls of data driven methods in gait analysis leads to deeper insight into Parkinson’s disease and opens new doors for the disease management.
As one of the most common types of heart arrhythmias, atrial fibrillation is a severe disorder of the heart rhythm affecting the left atrium. Among the most serious complications are stroke and tachycardia-mediated cardiomyopathy. Moreover, recurrent symptoms impair patients’ quality of life and functional status. Common treatment options for rhythm control are antiarrhythmic drug therapy and catheter ablation procedures. Cardiac ablation procedures are usually performed minimally invasively in electrophysiology labs. During these procedures, ablation catheters are navigated into the heart chamber via the venous system to ablate specific areas involved in the conduction of irregular impulses.
For the treatment of paroxysmal atrial fibrillation, ipsilateral pulmonary vein isolation is a common ablation pattern. Interventional X-ray imaging is commonly employed for catheter guidance and control. Furthermore, electroanatomic mapping systems can be used for the treatment of complex arrhythmias. Ablation planning data can be used during X-ray guided procedures as well as included in mapping systems to support the physician by supplying further context information.
In this thesis, artificial intelligence based methods for the interventional treatment of atrial fibrillation using ablation procedures are presented. We developed an algorithm for automatic lesion planning targeted at pulmonary vein isolation procedures for the treatment of atrial fibrillation. This method employs a landmark-constrained non-rigid registration algorithm for the accurate alignment of left atrium heart models. Procedure planning data is generated for the individual patient anatomy to be superimposed during the ablation procedure. A quantitative and qualitative evaluation of the algorithm was performed on clinical datasets. The accuracy of the automatically generated ablation planning lines fulfilled clinical requirements. The achieved mean error of 2.7 mm implies a 29 % improvement compared to the state-of-the-art algorithm. The qualitative evaluation showed full acceptance of the automatically generated planning lines.
Another aspect investigated in this thesis is the optimization of individual fluoroscopic projection angles for X-ray guided cardiac procedures. We developed an algorithm to estimate individual X-ray C-arm angulations based on pre-procedure planning information, taking the individual patient anatomy into consideration. The C-arm angulations are optimized with respect to the orientation of the planning structure to minimize the foreshortening in the projection image. The mathematical framework can be applied to monoplane and biplane C-arm imaging systems. Limitations of C-arm imaging systems in terms of feasible rotation angles are also taken into account during the optimization. The algorithm was evaluated on clinical data for ipsilateral pulmonary vein isolation. Patient-specific C-arm angulations were computed and compared against commonly used standard angulations in terms of the foreshortening of planning structures in projection images. By applying individually optimized X-ray angulations, 28 % less foreshortening could be achieved on average for biplane systems.
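The underlying optimization can be sketched as a grid search over feasible C-arm angulations that minimizes the foreshortening of a planning structure, i.e., that makes the viewing direction as perpendicular to the structure's main axis as possible. The angle parameterization, the limits, and the foreshortening measure below are simplifying assumptions, not the framework developed in the thesis.

```python
# Hedged sketch: choosing a C-arm angulation that minimizes foreshortening of a structure.
import numpy as np

def view_direction(primary_deg, secondary_deg):
    """Simplified X-ray viewing direction for given primary/secondary C-arm angles."""
    a, b = np.radians(primary_deg), np.radians(secondary_deg)
    return np.array([np.sin(a) * np.cos(b), -np.sin(b), np.cos(a) * np.cos(b)])

def best_angulation(structure_axis, primary_range=(-120, 120), secondary_range=(-45, 45)):
    axis = structure_axis / np.linalg.norm(structure_axis)
    best = (None, None, 1.0)
    for p in range(primary_range[0], primary_range[1] + 1, 5):
        for s in range(secondary_range[0], secondary_range[1] + 1, 5):
            # Foreshortening ~ |cos| of the angle between viewing direction and axis.
            f = abs(view_direction(p, s) @ axis)
            if f < best[2]:
                best = (p, s, f)
    return best

axis = np.array([0.3, 0.8, 0.5])        # toy main axis of a planning structure
p, s, f = best_angulation(axis)
print("primary %d deg, secondary %d deg, residual foreshortening %.3f" % (p, s, f))
```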
This work presents several contributions to the scientific field of digital multimedia forensics. The field addresses questions of authentication examination and source identification of multimedia files. All investigated multimedia entities are visual information carriers, i.e., images and videos. The subfield of authentication examination proposes methods that allow the authenticity of the probed media object to be verified or falsified. Questions concerning source identification help to identify the source of the media file, where the term source offers different levels of granularity, for instance, the make or the model of a camera. Finding answers to questions of this kind requires characteristic cues that suit the investigated context. Such cues, or features, can be derived from the byte stream of the file as well as from the visual content, i.e., the decoded pixels. The selected feature domain depends on the exact problem statement. Analysis of pixel data is especially revealing if the manipulation is of good quality. In contrast, applications that work with encoded files can be efficiently deployed on large collections of media files. Oftentimes, file-based methods are also helpful if the quality of the visual content makes working with pixel-based applications rather difficult.
Therefore, this dissertation subsumes several of our published works from the field of digital multimedia forensics, which propose various newly developed approaches for both authentication examination and source identification. Additionally, the suggested approaches work in different domains. Through this diversity, we are able to cover a broad field of research questions and can solve different types of challenges.
The first part of this work focuses on source identification. We first illustrate that the current literature does not cover the developments introduced by the omnipresent usage of smartphones as imaging devices. Smartphones work with exchangeable software imaging routines. Hence, we suggest introducing a software components axis orthogonal to the hardware components axis, i.e., proposing a new level of source identification. With the help of a large-scale study on Apple devices, we demonstrate that different software versions have characteristic properties. We use this finding to predict software versions by resorting to machine learning. The influence of the software used when taking the image and during postprocessing is also investigated. Further, we illustrate how images from unknown models can be automatically associated with the make of the source device.
The second part of the dissertation focuses on videos and their representation in modern video codecs. We discuss neural networks in terms of their ability to identify double compression in videos. Double compression is a necessary step that happens automatically when manipulating a video. The introduced method is likewise capable of predicting a specific parameter, the so-called "group of pictures" length of the first compression of a double-compressed video. We further introduce methods that are capable of locating manipulations in the pixel domain, for example, persons spliced into the footage. We demonstrate that the same method can be successfully used to probe whether video sequences originate from the same video. The methods are benchmarked against state-of-the-art solutions and consistently outperform them.
Fluoroscopy-guided endovascular aortic repair (EVAR) has become the predominant treatment strategy for the elective repair of abdominal aortic aneurysms in many western countries. During the procedure, stent grafts are implanted into the vasculature to reduce the pressure on the vessel wall and prevent a potentially fatal aneurysm rupture. The fusion of preoperative information with intraoperative fluoroscopy has garnered considerable interest as a means to reduce the use of nephrotoxic contrast agent and to decrease radiation exposure and procedure time, thus limiting the negative side effects of the procedure. A rigid overlay of pre- and intraoperative images, however, disregards the substantial deformations caused by the endovascular instruments.
This thesis proposes and analyses different approaches to maintaining the usefulness of image fusion during EVAR by identifying and modeling the instrument-induced deformation. Particular attention is given to compliance with the interventional workflow, specifically in terms of underlying assumptions, requirements and computational costs. An algorithmic pipeline is developed that allows for the segmentation of relevant instruments in fluoroscopic images, the reconstruction of the instrument shape from single X-ray projections and the intraoperative deformation modeling based on this information.
For instrument segmentation, a deep learning approach is proposed that is able to reliably identify and distinguish stent grafts, guidewires, and catheters in a multi-task setting. In contrast to prior methods applied to these tasks, the approach requires neither an explicit model of the stent graft nor a handcrafted segmentation pipeline for each instrument. To allow for deformation modeling in 3-D, a method is designed that recovers the 3-D instrument shape from a single projection image. This avoids cumbersome repositioning of the fluoroscopic C-arm system. The approach estimates a second, virtual view of the wire based on the preoperative information, taking the intraoperative vessel deformation into account. To model the deformation solely from the instrument shape, an as-rigid-as-possible modeling is devised that accounts for the interaction between the instrument and a surface model of the vessel in a flexible manner. This is extended by a semi-automatic approach that adapts the deformation in a “one-click” scenario and further increases the accuracy of the deformation modeling. In contrast to previous methods, a bone-based initial alignment of pre- and intraoperative data suffices for accurate deformation modeling. Other approaches that assess the deformation are either based on computationally expensive finite element analysis, require a contrast-enhanced acquisition of the aortic vessel tree, or demand complex user interaction.
The pipeline is able to adapt the preoperative information to match the intraoperative deformation without the need for contrast injections. Still, available information can be integrated by using the semi-automatic method, resulting in a high in-plane accuracy of 0.5 mm at relevant anatomical landmarks. While each step of the proposed pipeline constitutes a value of its own, the proposed methods can be applied successively and allow for an adaptation from X-ray segmentation to 3-D deformation modeling in less than 10 s, integrating smoothly with the interventional workflow. The results on clinical data show the potential to further improve navigation, reduce the use of nephrotoxic contrast agents, and decrease radiation exposure, ultimately increasing the safety of both patients and medical personnel.
The complexity of developing modern vehicles is increasing continuously, not least due to the current topic of automated driving functions. In contrast to earlier driver assistance systems, which only support the driver, future systems must be able to take complete control of the vehicle. This places high demands on the quality and robustness of the systems and, consequently, on their development processes. Nevertheless, information is still exchanged between the individual development phases manually and in text-based form (e.g., as a requirements specification). Previous attempts to improve this with model-based approaches have mostly failed because of the developers' lack of willingness to adapt, owing to the initial learning and transition effort. Since simulations are used more and more for testing modern driving functions, the importance of the scenario descriptions required for them is also growing. So far, these have been created manually, either as text or directly as a simulation model. In addition, a reliable estimate of the necessary test effort, especially for multi-sensor systems, is indispensable at an early stage for solid project planning. Such project planning also includes a selection of the software development processes, methods, and tools to be used. The approaches available for this, however, neither provide a quick graphical overview nor can they compare very different approaches. The taxonomy presented in this work offers this possibility on the basis of the V-model and additional annotations. Since neither customer expectations of automated driving systems nor the developers' assessments of development methods remain constant, the results of three surveys on these topics are presented as a current snapshot within the scope of this work. In order to optimize the transitions between development phases while keeping the adaptation effort for users low, two iterative text-based concepts for creating requirements and scenario descriptions are also presented, each with a suitable domain-specific language in the JetBrains Meta Programming System (MPS) framework. Although already available approaches are also text-based, they are neither multilingual, nor do they use data at different levels of abstraction, nor can they automatically generate the hand-over artifacts (e.g., models) required for development. Even though solutions for estimating the test effort of multi-sensor systems already exist, they require either the physical sensor models or implementation details. This work therefore presents an approach that can determine the test effort required for a defined confidence level as early as the specification phase, based solely on the sensor characteristics. These five concepts contribute to the scenario-based development of automated driving functions.
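As a generic illustration of how a test effort can be tied to a confidence level (explicitly not the approach developed in this work, which relies on sensor characteristics), the classical zero-failure binomial argument gives the number of independent, successful test cases needed to claim a maximum failure rate with a given confidence.

```python
# Generic back-of-the-envelope calculation: test count for a target confidence level.
import math

def required_tests(max_failure_rate, confidence):
    """Number of consecutive successful tests needed to claim, with the given
    confidence, that the true failure rate is below max_failure_rate."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_failure_rate))

for conf_level in (0.90, 0.95, 0.99):
    print(conf_level, "->", required_tests(max_failure_rate=1e-3, confidence=conf_level), "tests")
```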
Automated driving will fundamentally change future mobility. Advancing digitalization forms the basis for technical innovation and for automating the driving task. Numerous high-profile demonstrations by various companies and research groups show the technical feasibility of automated driving functions. Above all, the growing complexity of future driving functions poses major challenges for development, testing, and release. Such systems are expected to take over the driving task completely within defined traffic domains. The driving function must safely handle all situations that occur. Identifying all relevant traffic situations cannot be accomplished with known methods of situation analysis. Existing methods do not focus on typical, normal, and uncritical traffic situations, although these are necessary for the requirements analysis and specification of automated driving functions. To cope with the growing complexity, research and the technical literature propose scenario-based methods for the development of automated driving functions. This thesis presents a method for identifying typical traffic situations. The methodology is based on a human decision-making model and comprises a systematic procedure. It takes into account expert knowledge as well as situation features relevant to the specific function and development context. The systematic procedure uses simulation methods for data collection as well as constraint programming. The constraint satisfaction problem for finding relevant situations is thus described in a declarative way. The validation shows that relevant and typical situations can be identified that might not be considered in an unstructured procedure during the requirements analysis and specification of the target system. Together with the end-to-end scenario-based development approach SBSE, situation identification shows great potential for the development of automated driving functions. Moreover, the presented approach forms the basis for further research in scenario-based development.
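The actual feature catalogue and constraints in the thesis come from expert knowledge and the target function. As a rough illustration of describing the search for admissible situations declaratively, the sketch below enumerates a tiny, invented feature space against hand-written constraints; it uses plain Python enumeration rather than a dedicated constraint solver, and all features and rules are assumptions.

```python
from itertools import product

# Hypothetical, coarsely discretized situation features.
road_types = ["highway", "rural", "urban"]
lead_vehicle_present = [True, False]
closing_speed_kmh = [-30, -10, 0, 10, 30]   # ego speed minus lead-vehicle speed
weather = ["dry", "rain", "fog"]

def satisfies_constraints(road, lead, dv, wx):
    # A closing speed is only meaningful if a lead vehicle exists.
    if not lead and dv != 0:
        return False
    # The assumed target function operates on highways only.
    if road != "highway":
        return False
    # Assumed domain restriction: no fast approaches (> 10 km/h) in fog.
    if wx == "fog" and dv > 10:
        return False
    return True

situations = [s for s in product(road_types, lead_vehicle_present,
                                 closing_speed_kmh, weather)
              if satisfies_constraints(*s)]
print(f"{len(situations)} admissible situations, e.g. {situations[0]}")
```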
Deformities and degenerations of the spine and extremities are among the most common diseases in orthopedics. To determine the pathology and its severity, musculoskeletal measurements are usually performed in 2-D radiographs of standing patients. However, conventional X-ray images have the disadvantage that effects like magnification and distortion can impair the quality and reliability of the measurements. In addition, awareness of the hazards of ionizing radiation and the associated risks has increased considerably, leading to a higher demand for low-dose protocols that do not compromise image quality.
In this thesis, both issues are addressed: low-dose imaging and true-to-scale mapping. To this end, a slot-scanning technique based on an ultra-small-angle tomosynthesis reconstruction is proposed and evaluated. For image acquisition, a commercially available twin robotic radiographic system is used. The system provides a parallel-shift trajectory of the X-ray source and the detector, which allows imaging of the entire body. In the evaluation, several important aspects are addressed.
First, the implemented motion compensation, which alleviates image artifacts due to patient motion, is evaluated. A simulation study shows that sinusoidal motion up to an amplitude of 16 mm can be corrected with a negligible mean residual motion of 0.34 mm.
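The abstract does not describe how the motion compensation works internally. The sketch below only illustrates, under assumed parameters, how a sinusoidal motion component of 16 mm amplitude could be estimated by linear least squares and the mean residual motion reported; the frequency is assumed known and all numbers are illustrative, not the thesis's algorithm.

```python
import numpy as np

# Illustrative sinusoidal patient motion over the scan; amplitude, frequency,
# and noise level are assumptions, not values from the thesis.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 12.0, 240)                    # scan time in seconds
motion = 16.0 * np.sin(2 * np.pi * 0.25 * t + 0.7) # true motion in mm
measured = motion + rng.normal(0.0, 0.3, t.size)   # noisy motion estimate

# Estimate amplitude and phase by linear least squares on a sin/cos basis
# (the frequency is assumed known, e.g. from a respiration signal).
basis = np.column_stack([np.sin(2 * np.pi * 0.25 * t),
                         np.cos(2 * np.pi * 0.25 * t),
                         np.ones_like(t)])
coeffs, *_ = np.linalg.lstsq(basis, measured, rcond=None)
estimated = basis @ coeffs

print(f"mean residual motion: {np.mean(np.abs(motion - estimated)):.2f} mm")
```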
Regarding in-plane spatial resolution, it was found that resolutions between 8 and 18 line pairs per centimeter can be achieved depending on the acquisition settings. Various MTF-based filter kernels providing different image impressions were investigated. It was shown that achieving isotropic resolution comes at the cost of an anisotropic noise appearance. Using an analytical model, it was validated that the suggested slot size of 50 mm on the detector is sufficient to reconstruct images free of blurring artifacts.
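As a rough illustration of how a resolution limit in line pairs per centimeter might be read off an MTF curve, the sketch below derives the MTF from a synthetic Gaussian line spread function and reports the 10% cutoff. The pixel pitch, blur width, and cutoff criterion are assumptions, not the kernels or settings evaluated in the thesis.

```python
import numpy as np

# Synthetic line spread function (LSF): a Gaussian profile on a hypothetical
# detector grid with 0.1 mm pixel pitch and an arbitrary blur of sigma = 0.25 mm.
pixel_pitch_mm = 0.1
x = np.arange(-64, 64) * pixel_pitch_mm
lsf = np.exp(-0.5 * (x / 0.25) ** 2)
lsf /= lsf.sum()

# The MTF is the magnitude of the Fourier transform of the normalized LSF.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freq_cyc_per_mm = np.fft.rfftfreq(x.size, d=pixel_pitch_mm)

# Read off the resolution limit as the frequency where the MTF drops below 10%,
# converted from cycles/mm to line pairs per centimeter.
limit_lp_per_cm = freq_cyc_per_mm[np.argmax(mtf < 0.1)] * 10.0
print(f"MTF10 resolution limit: {limit_lp_per_cm:.1f} lp/cm")
```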
Using a model motivated by X-ray physics, a scatter rejection coefficient as well as the dose-saving potential are calculated. Compared to conventional X-ray images acquired with an anti-scatter grid, the slot scan achieves dose savings of up to 71% without significantly affecting image quality in terms of signal-to-noise ratio.
Lastly, the ability to create true-to-scale images is investigated. Here, an analytical model is combined with a phantom study, which shows that neither magnification nor distortion in the scanning direction is present in the reconstructed image.
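For context on why true-to-scale imaging matters: with a point source, a structure is magnified by the ratio of the source-to-detector and source-to-object distances, so its apparent size depends on its depth, whereas the parallel-shift slot geometry keeps the scale in the scanning direction essentially constant. The distances in the sketch below are illustrative, not those of the evaluated system.

```python
# With a point source, anatomy is magnified by SID / SOD (source-to-image over
# source-to-object distance), so the apparent size depends on patient depth.
# A parallel-shift slot scan keeps the scale in the scan direction essentially
# constant. All distances below are illustrative.
source_image_distance_mm = 1150.0
for source_object_distance_mm in (850.0, 950.0, 1050.0):
    magnification = source_image_distance_mm / source_object_distance_mm
    print(f"SOD {source_object_distance_mm:.0f} mm -> magnification {magnification:.2f}")
```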
Finally, three applications that extend the proposed approach and further improve imaging in orthopedics are presented. First, an advanced collimator control is proposed that combines tomosynthesis with the slot scan to alleviate anatomical occlusion, e.g. in the shoulder girdle area. Second, an approach to fuse a 2-D slot scan with additional 3-D information of the knee is presented, dedicated to improving preoperative implant planning. Third, an autofocus-based method to generate smart radiographs from tomosynthesis reconstructions is proposed that aims to simplify the reading process.
Inner source (IS) is the use of open source software development practices and the establishment of open source-like communities within an organization. The organization may still develop proprietary software but internally opens up its development. IS promises to resolve problems of traditional software development by easing software reuse and enabling parties within an organization to collaborate across organizational boundaries.
However, it is unclear what elements constitute IS (problem I) and how to measure the presence and magnitude of IS collaboration (problem II). The large majority of research articles on IS to date are limited to qualitative results. There are as yet no quantitative studies on IS collaboration exploring how much IS collaboration takes place or how IS practices affect it (problem III).
We followed a three-phase research approach to address these problems. First, we performed an extensive literature survey and analyzed 43 IS publications. We found that four key elements constitute IS (shared cultural values, open development environment, communities around software, IS-specific scenarios) but that IS programs and projects differ on at least five dimensions (addressing problem I).
Second, we developed the patch-flow method (and a software tool implementing it) for measuring IS collaboration. Patch-flow is the flow of code contributions across organizational boundaries ("silos") such as organizational unit or cost center boundaries. We evaluated the method using case study research with a non-trivial industry organization and found it to be viable and useful to practitioners (addressing problem II).
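The patch-flow tool itself is not described in detail in this abstract. The sketch below only illustrates the core counting idea under an assumed data model: map each contribution to the contributor's organizational unit and the unit owning the receiving project, and count the contributions that cross a boundary.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Contribution:
    author: str
    author_unit: str    # organizational unit ("silo") of the contributor
    project_unit: str   # unit owning the receiving inner source project

# Hypothetical records; in practice these would be mined from version control
# history and mapped onto the organizational chart.
contributions = [
    Contribution("alice", "unit-a", "unit-a"),
    Contribution("bob",   "unit-b", "unit-a"),
    Contribution("carol", "unit-c", "unit-a"),
    Contribution("dave",  "unit-a", "unit-a"),
]

# A contribution counts as patch-flow when it crosses an organizational boundary.
patch_flow = [c for c in contributions if c.author_unit != c.project_unit]
print(f"patch-flow share: {len(patch_flow) / len(contributions):.0%}")
print(Counter((c.author_unit, c.project_unit) for c in patch_flow))
```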
Third, we performed a multiple-case case study with three large software organizations running a total of five IS programs. We identified the IS practices used and the resulting patch-flow. We found patch-flow to exist in all organizations, but only a fraction of all code contributions to IS projects constitutes patch-flow. We observed that the number of IS practices implemented correlates with the distance between the parties involved in a collaboration. This indicates that IS is particularly suited to enable collaboration between parties that are far apart in an organization (addressing problem III).
This thesis delivers a holistic definition of IS and the first classification framework for IS programs and projects. Researchers can use this framework to reason more precisely about the generalizability of their results. The patch-flow measurement method is the first of its kind to measure and quantify IS collaboration and can serve as a basis for further quantitative analyses of IS collaboration. The exploration of patch-flow in the three industry cases can serve as an example and benchmark for practitioners.