004 Data processing; Computer science
With the introduction of computer-aided methods for diagnosis and intervention, patient outcomes of many clinical procedures have improved tremendously over the last decades. An essential task in this context is the registration of inter- or intra-patient images acquired with a single imaging modality or with multiple modalities. Vasculature permeates virtually all organs of the human body, and because it is spatially embedded in the surrounding tissue, it reflects that tissue's pathological changes. Accurate and robust methods to register vessel structures or images therefore have the capability to benefit a large variety of clinical procedures.
Numerous novel vessel registration algorithms have been published to date. Both intensity- and feature-based methods have been proposed, with a clear dominance of feature-based, especially point-based, methods emerging from the summary of the state of the art. Although the methods proposed in the literature demonstrate promising results for dedicated clinical setups, a generalized approach to vessel registration regardless of the region of interest has not been proposed and evaluated. Furthermore, most state-of-the-art methods require cumbersome hyperparameter tuning, which consequently reduces their clinical applicability. Therefore, a primary goal of this thesis is to develop, implement and evaluate an accurate and inherently robust vessel registration framework that generalizes and applies to various clinical applications. Recent progress in machine learning and deep learning opens new perspectives for improving the efficiency of conventional registration methods for vasculature. Promising registration results have been achieved with different learning paradigms, such as reinforcement learning and supervised and unsupervised learning with synthetic and clinical data. Hence, a secondary goal of this thesis is to investigate the potential of machine learning and deep learning techniques to solve vessel image registration problems.
One of the two point-based registration frameworks investigated in this thesis utilizes mixture models to align the centerlines of vasculature. A hybrid mixture model is proposed as a key part of the framework; it models the spatial and topological information of the vasculature simultaneously and is consequently equipped with significant discriminative capacity. Moreover, an automatic refinement mechanism to identify regions with missing data is formulated. The final transformation to the target image can be estimated with different methods such as thin-plate splines, B-splines or Gaussian processes. The evaluation on synthetic, phantom and clinical data acquired from different clinical procedures demonstrates the accuracy and inherent robustness of the entire vessel registration framework regardless of the clinical setup.
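The spatial part of such a mixture-model alignment can be illustrated with a small EM-style point-set registration in the spirit of Gaussian-mixture approaches; the hybrid model of the thesis additionally encodes vessel topology and non-rigid transforms, which are omitted here. The following is a minimal numpy sketch under simplifying assumptions (rigid 2-D transform, isotropic variance with a heuristic annealing schedule); all function names are illustrative and not taken from the thesis.

```python
import numpy as np

def rigid_em_registration(source, target, n_iter=50, sigma2=None):
    """Align 2-D source points to target points with an EM loop:
    E-step: soft correspondences from isotropic Gaussian kernels,
    M-step: weighted Procrustes update of rotation R and translation t."""
    X = target                                      # (N, 2) fixed point set
    if sigma2 is None:                              # initial mixture spread
        sigma2 = np.mean(np.sum((X[:, None] - source[None]) ** 2, axis=2)) / 2
    R, t = np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        TY = source @ R.T + t                       # currently transformed source
        # E-step: responsibility P[m, n] of source point m for target point n
        d2 = np.sum((X[None] - TY[:, None]) ** 2, axis=2)         # (M, N)
        P = np.exp(-d2 / (2 * sigma2))
        P /= P.sum(axis=0, keepdims=True) + 1e-12
        # M-step: weighted Procrustes (Kabsch) on the soft correspondences
        w = P.sum(axis=1)                                          # (M,)
        mu_y = (w @ source) / w.sum()
        mu_x = (P @ X).sum(axis=0) / w.sum()
        H = (P @ X - np.outer(w, mu_x)).T @ (source - mu_y)        # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = U @ np.diag([1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
        t = mu_x - R @ mu_y
        sigma2 = max(sigma2 * 0.95, 1e-6)                          # simple annealing
    return R, t

# toy example: noisy "centerline" points under a known rotation and translation
rng = np.random.default_rng(0)
src = rng.normal(size=(80, 2))
ang = np.deg2rad(15)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
tgt = src @ R_true.T + np.array([0.3, -0.1]) + 0.01 * rng.normal(size=(80, 2))
R_est, t_est = rigid_em_registration(src, tgt)
print(np.round(R_est, 3), np.round(t_est, 3))
```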
The other approach formulated in this thesis makes use of the imitation learning paradigm to overcome the weaknesses and challenges of adopting reinforcement learning for image registration. Under the guidance of a demonstrator, an agent is trained to find the optimal displacement of landmarks. The network architecture is inspired by PointNet, which is able to consume raw point data as input. The proposed framework is evaluated retrospectively on clinical fundoscopic images. Particular attention in the conducted experiments is paid to understanding the principles of our imitation network, and the influences of the model parameters are analyzed in detail. The evaluation results demonstrate, for the first time, the potential and effectiveness of the imitation network for registering vascular images.
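As a rough illustration of the idea of consuming raw point data, the sketch below shows a PointNet-style network that maps 2-D landmark coordinates to per-point displacements and is fitted against displacements provided by a "demonstrator". This is a simplified stand-in, not the architecture or training loop of the thesis; all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class PointDisplacementNet(nn.Module):
    """PointNet-style regressor: a shared per-point MLP, a max-pooled global
    context vector, and a head predicting a 2-D displacement per point."""
    def __init__(self, dim=2, feat=64):
        super().__init__()
        self.local = nn.Sequential(nn.Linear(dim, feat), nn.ReLU(),
                                   nn.Linear(feat, feat), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * feat, feat), nn.ReLU(),
                                  nn.Linear(feat, dim))

    def forward(self, pts):                  # pts: (batch, n_points, dim)
        f = self.local(pts)                  # per-point features
        g = f.max(dim=1, keepdim=True).values.expand_as(f)   # global context
        return self.head(torch.cat([f, g], dim=-1))          # displacements

# toy supervised step: imitate a demonstrator that provides target displacements
net = PointDisplacementNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
pts = torch.rand(8, 100, 2)                  # batch of moving landmark sets
demo = 0.05 * torch.randn(8, 100, 2)         # demonstrator-suggested displacements
for _ in range(10):
    loss = nn.functional.mse_loss(net(pts), demo)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```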
This thesis studies the detection, tracking and classification of weak targets in interference-dominated environments using radar. The problem is approached from the perspective of both resource-friendly and resource-limited radar systems. For resource-friendly radar systems, cognitive architectures that can progressively sense the environment and adjust their operating waveform and receive filters are analyzed. The joint optimization of transmit waveforms and receive filters so that they operate optimally in the presence of weak targets and interference has been of great interest in both academia and industry. Recent advances in adaptive waveform synthesis have focused on the joint design and implementation of knowledge-aided receiver signal processing techniques and adaptive transmit signals. This closed-loop radar framework, which mimics mammals' neurological capability to tune system parameters in response to cognition of the environment, is commonly referred to as "cognitive radar" or "fully adaptive radar".
In this thesis, the jointly optimal transmit waveform and receive filter that maximize the output signal-to-interference-plus-noise ratio for a single-input, single-output radar in the presence of an extended target and colored interference are presented. The ambiguity function, processing gain and Cramér-Rao bound for such waveform-filter pairs are derived. Apart from the optimal waveform dictated by the joint optimization strategy, it is desirable that the radar transmit waveforms possess a constant time envelope so that the power amplifiers can be driven at saturation. This constraint requires the reconstruction of constant-envelope signals, which is addressed using the proposed relaxed iterative error-reduction algorithm. In general, iterative algorithms are sensitive to the initial seed; this is resolved by deriving a closed-form solution under a stationary-phase assumption. In the case of multiple-input multiple-output (MIMO) radars, the interference between signals can significantly limit the radar's ability to observe weak targets in the presence of stronger targets and background clutter. For multi-channel radars, this thesis proposes orthogonally coded linear frequency modulated (LFM) waveforms, wherein consecutive complex LFM signals in a frame are coded with orthogonal codes, namely Golay complementary, Zadoff-Chu, direct spread spectrum, space-time block coded, discrete Fourier transform and Costas-based sequences. The orthogonal codes that modulate the LFM across symbols form a fixed library of waveforms, leading to partial adaptation instead of the arbitrary waveforms dictated by "fully adaptive radar". The ambiguity function for such an orthogonally coded MIMO radar is derived, and the waveforms are analyzed in terms of their ambiguity function and imaging performance.
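For readers unfamiliar with the ambiguity function used throughout this analysis, the sketch below evaluates the narrowband ambiguity surface of a single LFM chirp numerically. It is a generic illustration with arbitrarily chosen chirp parameters, not the waveform-filter design or the MIMO ambiguity derivation of the thesis.

```python
import numpy as np

def ambiguity(s, fs, doppler):
    """Narrowband ambiguity surface |chi(tau, f_d)| of a complex baseband
    signal s sampled at fs, evaluated at all integer-sample delays and the
    given Doppler frequencies (Hz)."""
    n = len(s)
    t = np.arange(n) / fs
    delays = np.arange(-n + 1, n)                       # lags of the 'full' correlation
    amb = np.zeros((len(doppler), len(delays)))
    for i, fd in enumerate(doppler):
        sd = s * np.exp(2j * np.pi * fd * t)            # Doppler-shifted copy
        amb[i] = np.abs(np.correlate(sd, s, mode="full"))   # delay correlation
    return delays / fs, amb / np.abs(np.vdot(s, s))         # normalize peak to 1

# example: 10 us LFM chirp with 5 MHz sweep, sampled at 20 MHz
fs, T, B = 20e6, 10e-6, 5e6
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)
tau, amb = ambiguity(chirp, fs, np.linspace(-2e6, 2e6, 101))
print(amb.shape, amb.max())          # peak of 1.0 at tau = 0, f_d = 0
```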
With advances in silicon and packaging technology, radars have evolved from high-end aerospace technology into relatively low-cost Human-Machine Interface (HMI) sensors. However, sensors in such industrial and consumer settings must have a small form factor and low cost, and thus cannot sustain cognitive architectures to detect and classify weak human targets. To improve detection and classification performance for HMI applications, novel processing and learning algorithms are proposed. In practice, learning-based solutions using low-cost radars face several challenges, particularly with respect to open-set classification. In open-set classification, the system needs to handle variations of the input data, alien operating environments and unknown classes. Conventional deep learning approaches use a simple softmax layer and evaluate accuracy on known classes only, i.e., on closed-set classification. The softmax layer provides separability of classes but does not provide discriminative class boundaries. Hence, many unknown classes are erroneously predicted as one of the known classes with high confidence, resulting in poor performance in real-world environments. Other challenges arise from the inconspicuous inter-class differences between features of closely related classes and from the large intra-class variations in radar data from the same class. To address these challenges, novel representation learning algorithms along with novel loss functions are proposed in this thesis. Unlike conventional deep learning approaches using softmax, which learn to classify, deep representation learning learns the process of classification by projecting the input feature images into an embedding space where similar classes are grouped together while dissimilar classes are far apart. Thus, deep representation learning approaches simultaneously learn separable inter-class differences and compact, discriminative intra-class representations, which are essential for open-set classification. Specifically, the proposed representation learning algorithms are evaluated in the context of gesture sensing, material classification, air-writing and kick sensing HMI applications.
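A minimal example of deep representation learning of this kind is a small embedding network trained with a triplet margin loss, so that samples of the same class cluster in the embedding space while other classes are pushed away. The sketch below uses random tensors in place of radar feature images and a generic loss; the architectures and the novel loss functions proposed in the thesis are not reproduced.

```python
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Small CNN that maps a radar feature image (e.g. a range-Doppler map)
    to an L2-normalized embedding vector."""
    def __init__(self, emb_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, emb_dim)

    def forward(self, x):
        z = self.fc(self.conv(x).flatten(1))
        return nn.functional.normalize(z, dim=1)

net = EmbeddingNet()
criterion = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# one illustrative step with random anchor / positive / negative maps
anchor, positive, negative = (torch.randn(16, 1, 32, 32) for _ in range(3))
loss = criterion(net(anchor), net(positive), net(negative))
opt.zero_grad(); loss.backward(); opt.step()

# at test time, an unknown input can be rejected if its embedding is far
# from every known class centroid (a simple open-set criterion).
```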
In the context of the United Nations Paris Climate Agreement of 2016, the majority of the globally leading automotive manufacturers have committed to electrifying their fleets. A particular challenge in achieving this transformation is the efficient and economical development of new types of battery systems that meet the high customer requirements for electric range and fast-charging capability as well as legally required safety standards. These requirements must be guaranteed over the entire vehicle lifetime. However, the battery ages over time due to electrochemical degradation effects during operation. As a consequence, the battery state needs to be continuously monitored and analyzed. New ways of analysis are required, as the current characterization of the battery state during maintenance involves high financial effort and time-consuming measurement procedures and is limited by the low number of available test capacities.
An innovative and scalable alternative is offered by deploying battery models. In this context, this thesis addresses the research question of the extent to which battery-electric modeling can determine the battery state using only in-vehicle operating data. To this end, the approach is divided into two research areas: first, the modeling of the current electric battery behavior based on in-vehicle data, and second, the methodology for analyzing the battery state.
While conventional battery models are mainly based on physical system representations, this thesis focuses on novel data-driven methods that are able to independently learn relevant correlations from vehicle operational data and to use this information to continuously update the battery model. In a preliminary analysis, artificial neural networks with a sliding window approach proved to be a suitable candidate to learn the electric battery behavior during operation.
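The sliding-window idea can be sketched as follows: windows of the most recent operating signals are stacked into one feature vector, and a small neural network regresses the next terminal voltage. The example below is a minimal illustration on synthetic data with hypothetical signal names (current, temperature, state of charge); the network architecture and feature set of the thesis are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(signals, target, width):
    """Stack the last `width` samples of all input signals into one feature
    vector per time step and pair it with the next target value."""
    X, y = [], []
    for k in range(width, len(target)):
        X.append(signals[k - width:k].ravel())
        y.append(target[k])
    return np.array(X), np.array(y)

# synthetic stand-in for in-vehicle operating data
rng = np.random.default_rng(1)
n = 2000
current = rng.normal(0, 20, n)                        # A
temperature = 25 + np.cumsum(rng.normal(0, 0.01, n))  # degC
soc = np.clip(0.8 - np.cumsum(current) * 1e-5, 0, 1)  # rough coulomb counting
voltage = 3.6 + 0.5 * soc - 0.002 * current + rng.normal(0, 0.005, n)  # V

signals = np.column_stack([current, temperature, soc])
X, y = make_windows(signals, voltage, width=20)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X[:1500], y[:1500])
print("validation MAE [V]:", np.abs(model.predict(X[1500:]) - y[1500:]).mean())
```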
In terms of the methodology, the analysis of the battery state is considered separately at cell-level and system-level due to the high complexity of the battery behavior and the possible operating conditions inside electric vehicles. The development and evaluation of the methodology for battery state determination are carried out at cell-level by using extensive operating and testing conditions. In particular, pulse tests and incremental capacity analysis achieved high accuracies. However, the results of this work also show that the cell-level methodology cannot be directly extrapolated to system-level battery behavior due to systematic and statistical uncertainties.
Apart from the current limitations of neural battery models, various optimization potentials in the area of the training process and data preprocessing are identified. In conclusion, the obtained findings provide an outlook for further applications in the context of data-driven battery analysis.
The global prevalence of various metabolic pathologies, such as the metabolic syndrome, obesity, diabetes or nonalcoholic fatty liver disease (NAFLD), has increased steadily in recent decades and is taking on the proportions of a worldwide epidemic. It is therefore important to understand the background and pathogenesis of these disorders in order to reverse this trend. In recent years, research in Magnetic Resonance Imaging (MRI) has focused on the extraction of quantitative biomarkers from qualitative MR image data, which allows relaxation properties or signal ratios to be estimated quantitatively in the form of parameter maps. Quantitative MRI (qMRI) techniques come with the promise of earlier disease detection and better grading and staging of diseases. Since MRI is a non-invasive and non-ionizing modality, it allows large-scale research studies and clinical assessment. Nevertheless, widespread clinical adoption of qMRI for the investigation of the described disorders is still limited. Conventional methods suffer from lengthy scan times and motion sensitivity, since a number of images have to be acquired and since Cartesian sequences are often applied. In the abdomen, this is further challenged by respiration, which is why data are typically acquired during breath-holds. Moreover, currently applied biomarkers are limited in terms of their significance for the prediction of disease states.
This work includes techniques for the MRI-based estimation of a new biomarker, the triglyceride saturation state, which potentially gives more insight into the pathogenesis of metabolic abnormalities or the risk of breast cancer development. To this end, two methods are proposed that calculate 3-D parameter maps of the relative fractions of saturated, mono-unsaturated and poly-unsaturated fatty acids with regard to the total fat content. Both techniques efficiently sample Cartesian multi-echo gradient-echo (GRE) data using bipolar readout gradients and apply low-rank denoising before parameter estimation. Phase errors are addressed either analytically using an eddy-current phase parameter or by applying novel echo-dependent phase maps. In vitro, the developed methods yielded accurate and reproducible results across different protocols. In vivo, applicability was assessed using measurements in healthy volunteers and patients in the abdomen and the breast.
Moreover, two methods that allow proton-density fat fraction (PDFF) and R2* quantification during free-breathing are proposed. A motion-robust radial stack-of-stars sequence is applied and iteratively reconstructed using a model-based technique that combines Parallel Imaging (PI) and Compressed Sensing (CS). The developed approaches are capable of estimating self-gated respiratory motion-averaged and motion-resolved parameter maps from undersampled as well as fully-sampled data. Compared to a conventional state-of-the-art technique, the motion-averaged reconstruction achieved accurate PDFF values, and the motion-resolved reconstruction accurate PDFF and R2* results in n = 14 patients enrolled in a hepatobiliary research protocol. For patients unable to suspend respiration, the developed methods are promising alternatives to conventional techniques, which typically apply motion-sensitive Cartesian sequences and sample data during breath-holding.
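Once water and fat signal estimates are available from such a reconstruction, PDFF and R2* follow from simple per-voxel relations. The sketch below shows the standard PDFF definition and a simplified log-linear mono-exponential R2* fit on toy data; it is only an illustration of the quantities involved, not the model-based PI/CS reconstruction proposed in the thesis.

```python
import numpy as np

def pdff(water, fat):
    """Proton-density fat fraction in percent from water/fat magnitude images."""
    return 100.0 * np.abs(fat) / (np.abs(water) + np.abs(fat) + 1e-12)

def r2star_loglinear(echoes, te):
    """Voxel-wise R2* [1/s] from multi-echo magnitudes via a log-linear fit
    of |S(TE)| = S0 * exp(-TE * R2*)."""
    te = np.asarray(te, dtype=float)                   # echo times in seconds
    logs = np.log(np.abs(echoes) + 1e-12).reshape(len(te), -1)
    tc = te - te.mean()
    slope = (tc[:, None] * (logs - logs.mean(axis=0))).sum(axis=0) / (tc ** 2).sum()
    return (-slope).reshape(echoes.shape[1:])

# toy voxel grid: ground truth R2* = 40 1/s, fat fraction = 20 %
te = np.array([1.2, 2.4, 3.6, 4.8, 6.0]) * 1e-3
s0 = np.full((4, 4), 100.0)
echoes = s0 * np.exp(-40.0 * te[:, None, None])
print(r2star_loglinear(echoes, te).round(1))           # ~40.0 everywhere
print(pdff(water=np.full((4, 4), 80.0), fat=np.full((4, 4), 20.0)))
```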
In summary, in this work relevant contributions are made to the fields of fat quantification, fatty acid composition calculation and R2* estimation. The proposed techniques address currently existing limitations in the field of qMRI for imaging various disorders in the abdomen.
Query processing is a traditional yet still active field of research. Its significance derives from the increase in data created and processed every day and from the opportunities provided by analyzing these data. In today's world, entire businesses are built on top of sophisticated data processing capabilities. However, as data volumes grow, processing these huge amounts of data becomes more and more challenging, not only because of the time and resources it takes but also due to the energy costs. Consequently, researchers have broadened the range of processing architectures investigated for query processing beyond traditional processor-based systems. Next to programmable graphics processing units (GPUs), field programmable gate arrays (FPGAs) have become of great interest due to their unique features. FPGAs not only allow the construction of highly optimized hardware circuits for specific tasks but also enable the adaptation of the hardware to the tasks at runtime. Hence, many researchers have presented proposals to exploit the features provided by FPGAs. Although the proposed systems can achieve high throughput and efficiency in general, they are often not able to accelerate queries that were not considered during their design. Performance and efficiency are best gained through specialization, and thus a system should adapt to an incoming, unknown query. This is possible with FPGAs due to their ability to be reconfigured fully or in parts during runtime. However, this comes at the cost of high startup times, as the FPGA has to be configured according to the query prior to its execution. Furthermore, it is almost impossible to generate hardware configurations for every possible query.
This thesis introduces an innovative FPGA-based near-data processing system able to process a wide variety of queries at I/O rate (line rate). It is based on reconfigurable and parametrizable accelerators. The accelerators are composed of parametrizable modules from a library. These modules do not only implement a specific operator for a specific type but are optimized to implement operators for multiple types or even multiple functions without a drastic increase in resources. Another contribution of this thesis is the concept of optimistic query processing for demanding operators such as the join and regular expression matching operators. It is based on the idea of approximately filtering as much data as possible in hardware without removing tuples that should be kept. The resulting, often reduced, intermediate table is guaranteed to be a superset of the accurate filter operation. A software-based operator implementation can then be applied to an intermediate table with fewer tuples to finalize the operation. As an example, the implementation of a module for regular expression matching is presented.
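The optimistic processing idea can be illustrated in plain Python: a cheap "hardware" pre-filter that only guarantees a superset (here a very conservative literal-substring test derived from the pattern) is followed by the exact software regex on the reduced intermediate table. Function and column names are purely illustrative; the actual hardware module of the thesis is of course not reproduced here.

```python
import re

def literal_fragments(pattern):
    """Very conservative extraction of plain-text fragments from a regex.
    If any metacharacter is present we fall back to 'keep everything',
    preserving the superset guarantee."""
    return [pattern] if re.escape(pattern) == pattern else []

def optimistic_regex_filter(rows, column, pattern):
    fragments = literal_fragments(pattern)
    # stage 1: "hardware" pre-filter: may keep too much, never too little
    if fragments:
        candidates = [r for r in rows if any(f in r[column] for f in fragments)]
    else:
        candidates = rows
    # stage 2: exact software operator on the (usually smaller) intermediate table
    exact = re.compile(pattern)
    return [r for r in candidates if exact.search(r[column])]

rows = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"},
        {"id": 3, "name": "malice"}, {"id": 4, "name": "carol"}]
print(optimistic_regex_filter(rows, "name", "alice"))   # ids 1 and 3
print(optimistic_regex_filter(rows, "name", "^ali.*"))  # falls back, then id 1
```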
Equipped with a parameter sequencer, accelerators assembled from this library are able to implement a greater variety of queries by setting the parameters of the modules according to the query to process. However, the schema in which the tables are stored also influences the design of the accelerator and may therefore limit the types of queries it can implement. To address this, a hardware unit called ReOrder is introduced. It decouples the table schema and storage layout from the accelerator, enabling all accelerators to be used on every table with row-oriented and column-oriented storage layouts. Even though the developed accelerators are able to implement a wide variety of queries, no one-size-fits-all accelerator is possible. Consequently, the system is designed to concatenate multiple partially reconfigurable (i.e., exchangeable) accelerators without a decrease in tuple throughput. This increases the types of queries that can be processed even further. As accelerators might not use all available resources within a partially reconfigurable FPGA region, the idea of in situ statistics generation is proposed. In situ statistics modules can utilize the free resources to gather information on the table that is processed by an accelerator without additional costs in terms of time.
Complementary to the hardware-related parts already mentioned, control software managing the execution of a query on such a system is presented as well. Starting from the basic components needed to execute queries on the platform, the description goes into depth on the particularities of such an FPGA-based query execution system. In particular, the query placement problem, i.e., finding a query-specific configuration of the system's hardware for an incoming query, is formulated. In addition, the challenges of obtaining an optimal placement are discussed and exemplified using the problem of buffer assignment. Afterwards, the parameters of the modules have to be generated. In this regard, an algorithm to obtain the parameters for a ReOrder unit is presented and evaluated in depth. Additionally, considerations regarding parameter generators for a histogram module and the optimistic regular expression matching module are provided.
Finally, an implemented prototype of the system, called the ReProVide unit, has been evaluated. It is able to provide I/O-rate processing of simple as well as complex queries. Compared to a software-based in-memory database system executed on an ARM processor, queries were executed 19.9× faster on the prototype on average. When executed on an x86 processor, comparable execution times have been observed. This means the prototype system, storing the tables on two solid-state drives, was able to process queries as fast as an x86 system holding the tables in memory. Furthermore, the prototype built is shown to be very energy-efficient, consuming less than 25% of the energy consumed by the x86 system on average.
Magnetic Resonance Imaging (MRI) is a tomographic imaging technique based on the nuclear magnetism of, primarily, the 1H nuclei that are abundant in the human body, providing insights into interior tissue distributions. While its non-ionizing nature is advantageous compared to, e.g., Computed Tomography (CT), standard MRI only yields qualitative data that allow different tissue boundaries to be discriminated without providing quantitative values. To enable quantitative MRI, multiple dimensions, i.e., spatial voxels and different physiological modes, e.g., cardiac and/or different contrast phases, have to be acquired simultaneously for a comprehensive diagnosis. From such data, dynamically resolved volumes can be reconstructed, allowing the assessment of quantitative biomarkers from different imaging contrasts. In this thesis, we investigated novel and extended solutions for multidimensional and quantitative MRI. The proposed methods were trained and evaluated on phantom data, as well as on several acquired in-vivo data sets from a total of 49 volunteers.
First, we devised a Deep Learning (DL)-based reconstruction approach for Magnetic Resonance Fingerprinting (MRF) acquisitions. MRF acquires several highly undersampled imaging contrasts of the same slice by randomly changing the sampling parameters, resulting in a temporal sequence for each spatial position. Our proposed framework uses spatial patches of these sequences as input and regresses them to quantitative T1 and T2 relaxation values. The results showed that the DL-based reconstruction yields accurate results well comparable to the State-of-the-Art (SOTA) reconstruction while reducing the computational effort during runtime.
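Conceptually, such a patch-wise regression maps a block of fingerprint time series to one (T1, T2) pair. The toy sketch below shows this mapping with a small fully connected network on random tensors; the actual network architecture, input dimensions and training data of the thesis are not reproduced and all sizes are assumptions.

```python
import torch
import torch.nn as nn

# minimal stand-in for patch-wise MRF regression: each input is a spatial
# patch of fingerprint time series, the output is one (T1, T2) pair in ms
n_timepoints, patch = 300, 3
net = nn.Sequential(
    nn.Flatten(),                                   # (B, T*patch*patch)
    nn.Linear(n_timepoints * patch * patch, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 2))                               # -> (T1, T2)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
fingerprints = torch.randn(32, n_timepoints, patch, patch)   # toy batch
t1t2 = torch.rand(32, 2) * torch.tensor([2000.0, 300.0])     # toy labels [ms]
for _ in range(5):
    loss = nn.functional.mse_loss(net(fingerprints), t1t2)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```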
We extended a 3-D cardiac sequence to sample single-contrast (SC) data with cardiac and respiratory phases continuously. Repetitive Inversion Recovery (IR) preparation pulses were further inserted to enable the sampling of multi-contrast (MC) continuous data on the T1 relaxation curve. A Cartesian pseudo-spiral sampling was used to enforce a pseudo-random undersampling pattern for the subsequent Compressed Sensing (CS)-based reconstruction. We proposed a pipeline which binned acquired data samples into different anatomic and contrast phases prior to the CS reconstruction. The T1 relaxation values fitted from different contrast bins were shown to be well comparable with the ground-truth (GT) values, as well as with a SOTA T1 mapping sequence in a quantitative phantom.
Lastly, we proposed motion extraction and binning pipelines towards an in-vivo application of the continuous SC and MC sequences for cardiac MRI. A DL-based classifier was used to determine the R-waves directly from the acquired imaging data to bin all samples into different cardiac phases. The extracted R-waves were comparable to the R-waves from the SOTA Electrocardiogram (ECG)-sensor for both SC and MC in-vivo data, but without the use of an additional sensor. Moreover, our DL-based approach recognized R-waves that were not detected by the ECG-sensor due to its interference with the magnetic field. Further, we proposed frameworks for extracting the respiratory motion directly from the acquired data, either from central 1-D projections, or from low-resolution 2-D images with an adapted version of the sampling scheme. Our results showed that the pipelines can be used for both types of cardiac in-vivo SC and MC data with the same set of parameters. Further, the image quality for all reconstructions could be improved with the utilization of the pipelines and the reconstruction of data samples from one respiratory phase.
Minimally invasive procedures leverage X-rays for online diagnostics and planning, device navigation, and confirmation of successful patient treatment. Although these interventions yield better outcomes than open surgeries at a lower risk, X-rays introduce considerable health concerns. Prolonged irradiation of the same skin area results in skin rashes, hair loss, and even ulceration. Besides, every exposure to X-rays entails the stochastic risk of developing some form of cancer, making radiation dose management mandatory. Radiation protection builds on two fundamental pillars -- dose monitoring and dose avoidance. Unfortunately, current dose tracking systems used in a clinical environment simplify the actual physics, and particularly scattered radiation, to meet real-time requirements in the interventional suite. A common countermeasure to scatter is the use of an anti-scatter grid. While it reduces X-ray scatter in the image, it can increase the overall dose. To improve the situation, efficient methods to quantify scatter are desirable since they yield a better dose estimate. They can also facilitate removal of the anti-scatter grid, hence reducing X-ray dose. This thesis investigates approaches to integrate prior knowledge into neural networks to speed up dose and scatter estimation while maintaining physical plausibility.
We integrated a Monte Carlo simulation toolkit with digital twins of interventional X-ray systems and patients. A novel filtering-based technique allows for a quick comparison of computational and experimental dose studies. Using this framework, we cross-validated simulations with empiric measurements. We found considerable deviations induced by tissue-equivalent plastics and improper phantom registration.
As part of this framework, we developed a novel method comprising a comprehensive patient organ model, a fast primary X-ray fluence simulation, and a convolutional neural network to refine this simulation. Patient-specific absorption and density maps guide the network in mapping the primary fluence to primary and scatter dose distributions. Our method generalized well to unseen patients and anatomic regions in a computational study on skin dose and only slightly lost accuracy compared to the Monte Carlo simulation. However, we achieved these results in a fraction of the time, less than a second. The results also encourage investigating organ dose estimation.
For the case when only a patient-shape model is available, we extended the conventional skin dose formalism by a learning-based backscatter estimator. It infers a latent scatter representation from X-ray images, which encode the patient's anatomy. In order to improve the physical plausibility, both X-ray projection scatter and backscatter are calculated simultaneously. Our experiments revealed that both scatter distributions were accurately estimated. Also, a comparative study showed the superiority of the multi-task approach over the conventional and single-task methods. The accuracy is comparable to the previous approach at a much higher computational efficiency.
We also studied how to enforce low-frequency properties during learning-based X-ray projection scatter estimation to rule out corruption of relevant high-frequency information. As part of this work, we extended a shallow convolutional encoder by multivariate B-spline evaluation. This way, we decreased the parameter and runtime complexity by orders of magnitude while being on par with a state-of-the-art neural network. Moreover, we demonstrated data integrity and robustness to unseen noise. A phantom study involving cone-beam computed tomography images yielded encouraging results hinting at a potential clinical application of the proposed method.
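The core idea of constraining a scatter estimate to be low-frequency can be illustrated by letting a model output only a coarse grid of coefficients and evaluating a smooth spline at full image resolution. In the sketch below, scipy's bivariate spline stands in for the learned encoder output; this is not the architecture or spline formulation of the thesis, and all grid sizes are arbitrary.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def coarse_to_scatter(coeff, out_shape, degree=3):
    """Evaluate a coarse coefficient grid as a smooth, full-resolution
    scatter map with a bivariate spline (cubic by default)."""
    cy, cx = coeff.shape
    spline = RectBivariateSpline(np.linspace(0, 1, cy), np.linspace(0, 1, cx),
                                 coeff, kx=degree, ky=degree)
    return spline(np.linspace(0, 1, out_shape[0]), np.linspace(0, 1, out_shape[1]))

# a network would regress this 8x8 grid from the projection image;
# here we just use a smooth synthetic coefficient grid
yy, xx = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8), indexing="ij")
coeff = np.exp(-((yy - 0.5) ** 2 + (xx - 0.5) ** 2) / 0.2)
scatter = coarse_to_scatter(coeff, (512, 512))
corrected = np.random.rand(512, 512) - scatter     # toy scatter subtraction
print(scatter.shape, float(scatter.max()))
```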
Sports analytics research has a major impact on the development of innovative training methods and the broadcasting of sports events. This dissertation provides algorithms for both kinematic analysis and performance interpretation based on unobtrusively obtained measurements from wearable sensors. Its main focus is the processing of 3D-orientation features and the exploration of their potential for sports analytics. The proposed algorithms are described and evaluated in five exemplary sports. In scuba diving, rowing and ski jumping, the 3D-orientation of the body/boat/skis is determined and further processed to analyze and visualize the motion behavior. In snowboarding and skateboarding, the board orientation is calculated and processed for motion visualization and machine learning. Board sport tricks are automatically detected and subsequently classified by trick category and type. The methods of this work have already been partially applied for the TV broadcast of international competitions (e.g., the 2018 Olympics). Additionally, they can support sports science research by enabling thorough investigations and innovative training methods.
This thesis studies the use of hyperspectral images in two applications, namely remote sensing and art history. The common challenge present in both applications is the limited availability of labeled data. This limitation is caused by the tedious, time-consuming, and expensive manual data labeling process performed by experts in each respective field. At the same time, hyperspectral images and their feature vectors are typically very high-dimensional. The combination of these two factors challenges supervised machine learning algorithms. To tackle this problem, this work proposes to either adapt the limited data to the classifier or adapt the classifier to the limited training data.
Any discrete data set can be regarded as samples from an unknown distribution that is not directly accessible. Having access to this underlying distribution would enable drawing an infinite number of data points. Motivated by this idea, this work takes advantage of Gaussian mixture models (GMMs) to estimate the underlying distribution of each class in the dataset. Considering the limited available data, the GMMs are constrained to diagonal covariance matrices in order to limit the number of parameters. On both phantom data and real hyperspectral images, it has been shown that adding only a few synthetic training samples significantly improves the untuned classifier's performance. Furthermore, it has been observed that untuned classifiers reinforced with the synthesized training samples outperform tuned classifiers trained on the original training set. The latter suggests that the synthetic samples can replace the expensive parameter tuning process of classifiers.
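A minimal sketch of this augmentation idea, with a generic classifier and synthetic data standing in for the hyperspectral features, could look as follows: one diagonal-covariance GMM is fitted per class, synthetic samples are drawn from it, and an untuned classifier is trained on the enlarged set. Component counts, sample numbers and the choice of classifier are illustrative, not those of the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# small labeled set standing in for expensive expert annotations
X, y = make_classification(n_samples=300, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=60, stratify=y,
                                          random_state=0)

# fit one diagonal-covariance GMM per class and draw synthetic samples from it
X_aug, y_aug = [X_tr], [y_tr]
for c in np.unique(y_tr):
    gmm = GaussianMixture(n_components=2, covariance_type="diag",
                          random_state=0).fit(X_tr[y_tr == c])
    Xs, _ = gmm.sample(100)
    X_aug.append(Xs)
    y_aug.append(np.full(len(Xs), c))
X_aug, y_aug = np.vstack(X_aug), np.concatenate(y_aug)

base = SVC().fit(X_tr, y_tr)            # untuned classifier, original data only
augm = SVC().fit(X_aug, y_aug)          # untuned classifier plus synthetic data
print(base.score(X_te, y_te), augm.score(X_te, y_te))
```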
In a different approach, this work proposes to adapt the classifier to the limited data. Traditional classifiers with high capacity often overfit on extremely small training data sets. The Bayesian learning regime has a built-in regularization property in its formulation. This property motivates the idea of using Bayesian neural networks to remedy the overfitting problem of conventional (frequentist) convolutional neural networks (CNNs). The experimental results demonstrate that, for the same convolutional network architecture, the Bayesian variant outperforms the frequentist version. Using ensemble learning on the sample networks drawn from the Bayesian network further improves the classification performance.
Moreover, studying the evolution of the train and validation loss plots for both the Bayesian and the frequentist CNN clearly shows that the Bayesian CNN is significantly more robust against overfitting in the case of extremely limited training data and has higher generalization capability in this situation.
For the second application, i.e., the layer separation in old master drawings, this work studies the effectiveness of hyperspectral images, introduces the use of extended multi-attribute profiles (EMAPs) and hyper-hue features, and compares them against other state-of-the-art features using synthesized and real data. The results show that EMAPs and hyper-hue are more informative and representative feature spaces. Mapping the HS images to these spaces resulted in more accurate color pigment layer segmentation.
In recent years, classical pathology has been in a constant state of change towards digital pathology.
Digitisation forms the basis for supporting the work of pathologists in many ways by utilising computer-assisted methods. Among other things, modern methods of pattern recognition can accelerate previously time-consuming quantitative analyses, support reporting, and thus standardise and improve the results of pathological examinations. Digitalisation also offers new possibilities and ways of interdisciplinary cooperation and coordination between experts at an international level.
This work presents novel methods and software that can be used to support both the analysis of image data and the cooperation between users. For these purposes, species-independent cell detection methods are developed and analysed, allowing the automatic quantification of pulmonary haemorrhage and asthma on cytological image data. The basis for developing these methods is qualitatively and quantitatively comprehensive data sets, created online and in interdisciplinary collaboration with the annotation tool EXACT developed in this thesis.
For cell detection, an established deep learning-based method of object recognition is optimised with regard to resource consumption and the training pipeline to meet the specific requirements of pathological problems such as image size and unbalanced annotation distribution. Employing the developed detection methods, algorithmically pre-labelled images can be presented to the annotators for verification and can thus efficiently increase existing data sets in quantity and quality. In this context, the thesis also investigates whether humans adopt algorithmic errors.
The generalisation of algorithms across species and scanner boundaries is of great scientific and economic interest. Therefore, as an initial step of this dissertation, a dataset with human, feline and horse cell samples on the question of pulmonary haemorrhage is created, which is the first of its kind in terms of both the number of annotated cells and the number of species. In a second step, a registration method is developed that abstracts the pyramid representation of pathological images using a quadtree. In contrast to established methods, this approach does not require pre-segmentation or lower-resolution images and is the only one that can be applied natively to images with cytological questions. The findings and algorithms from the developments and studies described in this thesis are incorporated into the realisation of the online annotation tool EXACT in close cooperation with interdisciplinary experts. In contrast to established tools, EXACT places its focus on the traceability of annotation creation and on various visualisation techniques to increase the quality of annotations.
The presented methods for object recognition achieve accuracy comparable to that of human experts while at the same time increasing the efficiency and reproducibility of grading cytological images. Thus, these methods form the basis for creating and publishing qualitatively and quantitatively comprehensive multi-species datasets using the annotation tool developed in this thesis. While the routine use of the developed algorithms still requires extensive cross-institutional studies that independently prove the added value of these procedures, the online annotation tool EXACT is already successfully used in several companies and research projects for the creation and registration of a broad range of datasets.
We introduce PVSC-DTM (Parallel Vectorized Stencil Code for Dirac and Topological Materials), a library and code generator based on a domain-specific language tailored to implement the specific stencil-like algorithms that can describe Dirac and topological materials such as graphene and topological insulators in a matrix-free way. The generated hybrid-parallel (MPI+OpenMP) code is fully vectorized using Single Instruction Multiple Data (SIMD) extensions. It is significantly faster than matrix-based approaches on the node level and performs in accordance with the roofline model. We demonstrate the chip-level performance and distributed-memory scalability of basic building blocks such as sparse matrix-(multiple-) vector multiplication on modern multicore CPUs. As an application example, we use the PVSC-DTM scheme to (i) explore the scattering of a Dirac wave on an array of gate-defined quantum dots, to (ii) calculate a bunch of interior eigenvalues for strong topological insulators, and to (iii) discuss the photoemission spectra of a disordered Weyl semimetal.
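The matrix-free idea behind such stencil codes can be illustrated with a nearest-neighbour tight-binding operator applied directly on a 2-D lattice instead of through a stored sparse matrix. The following plain numpy/scipy sketch cross-checks the stencil application against an explicitly assembled matrix; it is only an illustration of the concept, not the generated SIMD/MPI code or the PVSC-DTM API.

```python
import numpy as np
import scipy.sparse as sp

def apply_hamiltonian(psi, t=1.0):
    """Matrix-free nearest-neighbour tight-binding operator on a 2-D square
    lattice with open boundaries: (H psi)_ij = -t * sum of the 4 neighbours."""
    out = np.zeros_like(psi, dtype=complex)
    out[1:, :]  -= t * psi[:-1, :]    # neighbour above
    out[:-1, :] -= t * psi[1:, :]     # neighbour below
    out[:, 1:]  -= t * psi[:, :-1]    # left neighbour
    out[:, :-1] -= t * psi[:, 1:]     # right neighbour
    return out

# cross-check against an explicitly assembled sparse matrix on a small lattice
n, t = 32, 1.0
T1d = sp.diags([1, 1], [-1, 1], shape=(n, n))
H = -t * (sp.kron(sp.identity(n), T1d) + sp.kron(T1d, sp.identity(n))).tocsr()

rng = np.random.default_rng(0)
psi = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
matrix_free = apply_hamiltonian(psi, t).ravel()
matrix_based = H @ psi.ravel()
print(np.allclose(matrix_free, matrix_based))     # True
```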
Visual Localization and Domain Adaptation for Road Scene Reconstruction from Car Fleet Images
(2021)
Accurate and reliable environment perception is essential for the development of automated driving functions. Camera systems represent the most versatile sensor category, but are also severely affected by domain shifts, in particular changing viewing conditions. This poses a challenge to algorithm development, training data acquisition, and test coverage. To meet the resulting demand for large-scale and well-balanced datasets at an affordable cost, classical data acquisition campaigns could be complemented by data collection from series production car fleets. Although most modern cars are connected with central data back-ends, economic restrictions regarding the amount of data that can be collected and transmitted apply.
Thus, in this work, a novel concept to leverage a car fleet for flexible and scalable data collection from specific road sections is presented. The reconstruction pipeline addresses the problems of camera pose estimation as well as domain adaptation for visual localization and image synthesis. To simulate the transition from continuous video sequences to image collections acquired by car fleets, a database specifically recorded on repetitive drives along a test route is introduced. Extending a Structure from Motion pipeline by a Semantic Keypoint Selection and GPS outlier detection, the reconstruction process is stabilized and localization accuracy is improved. However, the experimental results demonstrate that image registration rates are heavily limited by the repeatability of hand-crafted local features under changing conditions.
Therefore, this thesis investigates novel approaches to achieve robust cross-domain local feature matching employing image-to-image translation models based on Generative Adversarial Networks. Two methods aiming for domain adaptation – either in the low-level pixel space or in a more high-level convolutional feature space – are presented and compared in various experiments focusing on image matching, long-term visual localization, and Structure from Motion. The results show that pixel domain alignment is able to enhance correspondence search for rather small domain gaps, but fails for large differences, e. g., between day and night images. In contrast, integration of image-to-image translation into a self-supervised training process enables training of domain-invariant local feature representations. The method outperforms not only pixel domain adaptation, but also other hand-crafted and self-supervised local features in various long-term visual localization benchmarks, while it is competitive with state-of-the-art supervised features.
Besides visual localization, synthesis of homogeneous sequences demands alignment of images in a consistent appearance. This work introduces a novel dual-mode training concept combining a multimodal, unsupervised image-to-image translation network with a style alignment loss inspired by neural style transfer. As a result, the model is explicitly trained to apply the style of a reference image to the translation output. An expert survey and quantitative metrics demonstrate the enhanced quality and controllability of the image appearance compared to related work.
Since the majority of methods presented is trained without manual ground truth annotation, this work emphasizes the potential of unsupervised and self-supervised representation learning. The experimental results demonstrate that domain adaptation problems particularly benefit from unsupervised learning since labeled data are often not available. Future work could adapt the concepts investigated for road scene reconstruction to other tasks and applications dealing with similar challenges.
Föderierte medizinische Forschungsdatenbanken: Architekturen, Datenintegration und Abfragelogik
(2021)
Today, electronic medical data are used in many ways beyond the medical care of patients, in particular for research. In the context of personalized medicine, however, ever larger amounts of data are required to perform meaningful analyses. The obvious solution of combining and analyzing data from multiple sites entails a number of challenges. Three of them are addressed in this dissertation: architectures, data integration and query logic.
An important foundation is provided by software architectures in which data can be stored and analyzed in compliance with data protection regulations. In recent years, the importance of collaboration across sites or even consortia has increased. This dissertation provides two contributions on the translation of electronic cohort queries and their integration into federated research architectures. In this way, research networks become interoperable for distributed cohort queries, even if they are based on different technologies.
The heterogeneity of medical data generated at different sites represents a barrier to networked research. With the aim of overcoming it, the dissertation presents a method that enables the semi-automatic mapping and merging of heterogeneous data sets using a lexical approach.
For the analysis of medical data, various user-friendly query tools have been developed in recent years. However, these can be limited with respect to the query complexity they can express. A further part of this dissertation therefore investigates to what extent the currently common declarative methods for electronic cohort queries can be complemented by procedural ones. Finally, the last contribution of this dissertation deals with the modeling of temporal relationships in cohort queries and introduces a new graphical approach for this purpose.
In summary, the results of this dissertation show that collaboration within individual research networks or across several of them can be further improved through the intelligent processing of electronic cohort queries.
Digital images and videos have taken an outstanding role in many areas of everyday life, e.g., for documentation and communication of events. However, the availability of sophisticated software applications makes it straightforward to realistically manipulate digital footage. This can entail detrimental consequences. The goal of multimedia forensics is to provide as much information as possible on the origin, history and authenticity of multimedia samples. Over the past two decades, numerous successful algorithms have been proposed to address this goal. One of the major contemporary challenges of multimedia forensics is to maintain algorithm performance under strong lossy compression. Lossy compression sacrifices signal fidelity for reduced bit rates, and is particularly widespread in online and mobile applications.
In this dissertation, we present several contributions for the robust forensic analysis of strongly compressed images. First, we propose a taxonomy of existing multimedia forensics algorithms that categorizes approaches based on their relation to compression. We identify three major groups: the family of statistics-based algorithms that are impeded by compression, the family of compression-based algorithms that are symbiotic to compression, and the family of physics-based approaches that are largely insensitive to compression. Based on this categorization, we identify common strengths and major limitations, as well as potential remedies. Second, we make several algorithmic contributions in the groups of physics-based and statistics-based methods for robust and flexible forensic image analysis.
Arguably, the main limitations of physics-based approaches are their restrictive assumptions on scene composition and their frequent need for manual annotations. Different from other color-based works that solely model the illuminant color, we propose a more descriptive forensic cue that jointly models the influence of in-camera processing and illuminant conditions in a supervised fashion. We further propose a metric-learning-based extension of the color descriptor that requires much weaker supervision and is thereby amenable to significantly larger training datasets, enhancing performance. We show that our proposed descriptor is very robust against compression, and that it outperforms state-of-the-art splicing detectors on low-quality images, without being restricted to particular scene compositions and without requiring user input.
One of the main limitations of statistics-based approaches is that they typically deteriorate strongly in the presence of compression. To assess their real-world applicability to compressed images, it is critical to evaluate algorithms on rigorous, realistic test cases. We argue that this is infeasible for camera identification with existing databases, and propose a novel database to close this gap. Using this database, we investigate the robustness of learning-based camera identification. We present an approach that significantly outperforms the state of the art in camera identification, both on clean and on strongly compressed images. We further find that using compression as data augmentation can significantly improve performance on compressed images, even for completely unknown compression algorithms.
Although the first 3-D reconstructions of minute structures were made over 130 years ago, the need to understand the shape and morphology of various specimens at the microscopic level has increased significantly in the last decade. Since volumetric imaging techniques like computed tomography or magnetic resonance imaging are still limited in terms of spatial resolution and availability, interest in the digital reconstruction of histological tissue preparations, prepared and stained according to the scientific question, has evolved.
One fundamental technique in 3D histology reconstruction is the use of image registration for the alignment of slice images and the reversal of slice deformations. Here, the individual registration components are often chosen to best fit the data at hand. This thesis uses a rigid transformation model for simple alignment and a non-rigid, non-parametric image transform for image unwarping. The core principle of the unwarping strategy is the iterative Gauss-Seidel method, which is capable of selectively eliminating higher-frequency errors from given functions consisting of superimposed signals of different wavelengths. Evaluation strategies like the Sum of Squared Distances, the Target Registration Error and Gray-Level Co-occurrence Matrices are ways to assess the quality of the match between slices and the coherence of reconstructed histology volumes.
The histological preparation steps, such as resection, embedding, cutting and staining, significantly damage the tissue, and the digitization step leads to ambiguities and information loss regarding the shape of structures. The image data sets are therefore inherently afflicted with a large number of artifacts like intensity variations, foldings, deformations or missing parts.
The preprocessing and reconstruction pipeline starts with the gray-value conversion of color images to reduce the computational effort. As one of the two main contributions of this work, a novel intensity standardization for histological image sequences is proposed. Here, percentile values of the individual slice histograms are determined, and corresponding percentile values are treated as lists of function values. The Gauss-Seidel method is used to eliminate the higher-frequency intensity variations but preserve the lower-frequency differences stemming from changing tissue content. For an entire slice data set, a mean intensity correction of 0.45 and a mean standard deviation of correction of only 5.03 – half the correction achieved by other methods – demonstrate a moderate adaptation of the original intensity values, while qualitative results show good contrast and a smooth appearance. After optional replacement of defective or missing slice images and possibly manual sorting of the image data set, a first simple 3D reconstruction is achieved by rigid image registration and stacking. The remaining nonlinear slice deformations prevent real coherence of the anatomy.
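The intensity standardization idea can be sketched as follows: per-slice histogram percentiles are treated as functions of the slice index, smoothed with a Gauss-Seidel-type relaxation, and each slice is then remapped to its smoothed percentiles by piecewise-linear interpolation. This is a simplified numpy illustration with arbitrarily chosen percentiles, weights and iteration counts, not the exact procedure or parameters of the thesis.

```python
import numpy as np

def gauss_seidel_smooth(curves, iterations=50, weight=0.5):
    """Gauss-Seidel-style relaxation along the slice axis: each slice's
    percentile values are pulled towards the mean of their neighbours,
    which removes high-frequency (slice-to-slice) intensity jumps."""
    f = curves.copy()                       # shape: (n_slices, n_percentiles)
    for _ in range(iterations):
        for i in range(1, len(f) - 1):      # sweep reuses the updated f[i - 1]
            f[i] = (1 - weight) * f[i] + weight * 0.5 * (f[i - 1] + f[i + 1])
    return f

def standardize(stack, percentiles=(1, 25, 50, 75, 99)):
    """Remap every slice so that its percentiles match the smoothed ones."""
    p = np.array([np.percentile(s, percentiles) for s in stack])
    p_smooth = gauss_seidel_smooth(p)
    out = np.empty_like(stack, dtype=float)
    for i, s in enumerate(stack):
        out[i] = np.interp(s, p[i], p_smooth[i])   # piecewise-linear mapping
    return out

# toy stack: similar tissue content with artificial slice-wise brightness jumps
rng = np.random.default_rng(0)
stack = np.stack([rng.normal(100 + 10 * rng.standard_normal(), 5, (64, 64))
                  for _ in range(40)])
corrected = standardize(stack)
print(stack.mean(axis=(1, 2)).std().round(2),
      corrected.mean(axis=(1, 2)).std().round(2))
```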
The second main contribution of this work is the adaptation of the Gauss-Seidel method for image sequences. While the non-rigid image registration process provides a means to emulate the slice deformations and thus to reverse them, the Gauss-Seidel method determines how these deformations are eliminated by an iterative update of the individual images, taking into account their local neighborhood. The method is suitable for large datasets, since it does not require the entire dataset to be held in memory. Experiments show that the method converges quickly, and experiments with synthetic data with known deformations suggest that the true deformation of the slices is effectively reversed, with the mean Target Registration Error being below a 1-pixel offset to the original CT data set after 5 iterations. Visually, the results are convincing and provide instructive insights into microscopic morphology.
Further improvements of the proposed reconstruction pipeline include a gray-level conversion based on the staining protocol or structure of interest, better methods for replacing defective slices, or automated methods to restore the correct slice sequence. The intensity matching method could be adapted based on the tissue content. For large data sets, the Gauss-Seidel unwarping method should be extended to a multilevel approach. Possible applications include better anatomical teaching, improvement of in-vivo 3D imaging protocols, or digital pathology.
The special exhibition "Vom Abakus zu EXASCALE - 50 Jahre Informatik aus Franken" (From the Abacus to EXASCALE - 50 Years of Computer Science from Franconia), which took place in 2016 at the Museum Industriekultur, was a great success. However, an exhibition cannot convey as much information as its creators would actually like. This volume is therefore intended to provide more comprehensive background information on the exhibition, on the topics covered there, on the Informatik-Sammlung Erlangen, and on important figures in computer science. In doing so, it sheds light on part of the local and regional history of computer science and computing, of the companies based here, and of the Friedrich-Alexander-Universität Erlangen-Nürnberg, and highlights their influence and significance.
The increasing availability of process data retrieved from information systems has changed the nature of business process management in organisations. In particular, process analysis has moved from qualitative to evidence-based approaches. Process analytics has emerged as a paradigm for creating value from process data. It supports the mission of business process management to improve business processes by providing, amongst other things, enhanced decision support. To improve a business process, information about its status quo is required, which is provided by descriptive process analytics technologies, such as process mining. More importantly, knowledge about cause-effect relationships between the process, its context and its performance (i.e. its outcome) is essential.
Two major challenges present themselves for the analysis of cause–effect relationships in business processes with process data. First, techniques need to incorporate contextual data (e.g. the sold product or the ordering customer in a sales process) as well as the immediate process layer (e.g. the process sequence or the duration between process activities). Existing knowledge about the cause–effect relationships between these contextual attributes and the process should be input into such techniques to make them context-aware. Second, the output of the techniques should be comprehensible, so users are more willing to accept and subsequently use them. Techniques --- especially those based on machine learning --- should be explainable to allow users to discover new knowledge from the learnt structures between input and output attributes. The output should guide analysts in their quest to find cause–effect relationships between the executed process steps and the process outcome.
In light of these challenges, this doctoral thesis applies a design science research approach to design artefacts that support the analysis of cause–effect relationships from process data. First, the thesis introduces a holistic conceptualisation of the term process analytics. It stresses the importance of both the process of analysis and human and organisational concerns to create value from process data. Most importantly, various techniques are designed for the analysis of process data with machine learning. All of these techniques are instantiated and evaluated with real-life data sets. For one of the techniques, a case study demonstrates its usefulness to process analysts to discover root causes for performance issues. In addition, the thesis presents design principles for producing comprehensible process models derived from process data. Last, a structured literature review discusses explainability in predictive business process monitoring to identify future research needs.
Part A of this dissertation presents a summary introducing the overarching research objective and research questions. Part B consists of seven research papers, six of which have been published in various renowned academic outlets, such as the Decision Support Systems journal, the Information Systems journal and the European Conference on Information Systems.
Ongoing digitization efforts lead to vast amounts of music data, e.g., audio and video recordings, symbolically encoded scores, or graphical sheet music. Accessing this data in a convenient way requires flexible retrieval strategies. One access paradigm is known as “query by example,” where a short music excerpt in a specific representation is given as a query. The task is to automatically retrieve documents from a music database that are similar to the query in certain parts or aspects. This thesis addresses two different cross-version retrieval scenarios of Western classical music, where the aim is to find the database’s audio recordings that are based on the same musical work as the query. Depending on the respective scenario, one requires task-specific audio representations to compare the query and the database documents. Various approaches for learning such audio representations with deep neural networks are proposed, leading to improvements in the efficiency of the search and the quality of the retrieval results.
In the first scenario, the query is a short audio snippet. The retrieval is based on audio shingles, which are short sequences of chroma features capturing properties of the harmonic and melodic content of the audio recordings. The comparison between the query and the recordings from the database is realized by a nearest-neighbor search of the audio shingles. The thesis contains various contributions to increase the efficiency of the retrieval procedure in this scenario. In order to reduce the dimensionality of the shingles, deep-learning-based embedding techniques are used. Furthermore, a graph-based index structure for efficient nearest-neighbor search is applied. These adaptations lead to substantial improvements in terms of runtime and memory requirements.
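The shingle-based matching can be sketched as follows: chroma features are stacked into overlapping shingles, L2-normalized, and compared by a brute-force cosine nearest-neighbour search. The toy example below uses random matrices in place of real chroma features and replaces the learned embeddings and the graph-based index of the thesis with this simple baseline; shingle length and hop size are arbitrary.

```python
import numpy as np

def shingles(chroma, length=20, hop=5):
    """Stack `length` consecutive 12-D chroma frames into one shingle vector
    and L2-normalize it; returns an array of shape (n_shingles, 12 * length)."""
    idx = range(0, chroma.shape[0] - length + 1, hop)
    S = np.array([chroma[i:i + length].ravel() for i in idx])
    return S / (np.linalg.norm(S, axis=1, keepdims=True) + 1e-12)

rng = np.random.default_rng(0)
database = {f"recording_{k}": rng.random((500, 12)) for k in range(5)}   # toy chroma
db_shingles = {name: shingles(c) for name, c in database.items()}

# query: a short snippet cut from one of the recordings
query = shingles(database["recording_3"][200:260])

# brute-force nearest-neighbour search with cosine similarity
scores = {name: float((S @ query.T).max()) for name, S in db_shingles.items()}
print(max(scores, key=scores.get))        # should retrieve "recording_3"
```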
In the second scenario, a symbolically encoded monophonic musical theme is used as a query. The retrieval is based on a sequence-alignment algorithm relying on chroma-based audio features. This scenario is more challenging than the first one because the query (monophonic symbolic theme) and the database documents (audio recordings of polyphonic music) are fundamentally different from each other. The thesis contains various contributions to improve the retrieval results in this scenario. On the one hand, a novel dataset for musical themes is introduced that is helpful for evaluation purposes and supervised training procedures. On the other hand, various enhanced chroma representations are proposed for the retrieval task. In particular, a novel chroma-feature variant is introduced, where theme-like structures in the musical content of the audio recordings are enhanced by a deep neural network trained with a loss function (CTC) that allows for aligning the themes to the audio recordings during the training procedure. The experiments described in this thesis show that the results of the theme-based retrieval task are substantially improved by using this representation.
A high-performance implementation of a multiphase lattice Boltzmann method based on the conservative Allen-Cahn model supporting high-density ratios and high Reynolds numbers is presented. Meta-programming techniques are used to generate optimized code for CPUs and GPUs automatically. The coupled model is specified in a high-level symbolic description and optimized through automatic transformations. The memory footprint of the resulting algorithm is reduced through the fusion of compute kernels. A roofline analysis demonstrates the excellent efficiency of the generated code on a single GPU. The resulting single GPU code has been integrated into the multiphysics framework waLBerla to run massively parallel simulations on large domains. Communication hiding and GPUDirect-enabled MPI yield near-perfect scaling behavior. Scaling experiments are conducted on the Piz Daint supercomputer with up to 2048 GPUs, simulating several hundred fully resolved bubbles. Further, validation of the implementation is shown in a physically relevant scenario—a three-dimensional rising air bubble in water.
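The roofline check mentioned above boils down to comparing the measured performance against min(peak compute, memory bandwidth × arithmetic intensity). The helper below works this out with made-up machine and kernel numbers chosen purely for illustration; none of the values are measurements from the paper.

```python
def roofline_limit(peak_flops, bandwidth, flops_per_cell, bytes_per_cell):
    """Attainable performance (FLOP/s) of a kernel according to the roofline
    model: either compute-bound or bandwidth-bound."""
    intensity = flops_per_cell / bytes_per_cell      # FLOP per byte
    return min(peak_flops, bandwidth * intensity), intensity

# illustrative numbers: a GPU with 7 TFLOP/s FP64 peak and 800 GB/s bandwidth,
# and a D3Q19 lattice Boltzmann sweep with ~250 FLOP and ~2*19*8 bytes per cell
limit, ai = roofline_limit(7e12, 800e9, 250, 2 * 19 * 8)
bound = "bandwidth-bound" if limit < 7e12 else "compute-bound"
print(f"arithmetic intensity: {ai:.2f} FLOP/byte")
print(f"roofline limit: {limit / 1e12:.2f} TFLOP/s ({bound})")
```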
Deep‐learning‐based pipeline for module power prediction from electroluminescense measurements
(2021)
Automated inspection plays an important role in monitoring large-scale photovoltaic power plants. Commonly, electroluminescence measurements are used to identify various types of defects on solar modules, but they have not been used to determine the power of a module. However, knowledge of the power at the maximum power point is important as well, since a drop in the power of a single module can affect the performance of an entire string. Until now, this power has commonly been determined by measurements that require disconnecting or even dismounting the module, rendering a regular inspection of individual modules infeasible. In this work, we bridge the gap between electroluminescence measurements and the power determination of a module. We compile a large dataset of 719 electroluminescence measurements of modules at various stages of degradation, especially cell cracks and fractures, and the corresponding power at the maximum power point. Here, we focus on inactive regions and cracks as the predominant types of defect. We set up a baseline regression model to predict the power from electroluminescence measurements with a mean absolute error (MAE) of 9.0 ± 8.4 Wp (4.0 ± 3.7%). Then, we show that deep learning can be used to train a model that performs significantly better (7.3 ± 6.5 Wp or 3.2 ± 2.7%) and propose a variant of class activation maps to obtain the per-cell power loss as predicted by the model. With this work, we aim to open a new research topic. Therefore, we publicly release the dataset, the code, and the trained models to enable other researchers to compare against our results. Finally, we present a thorough evaluation of certain boundary conditions, such as the dataset size, and an automated preprocessing pipeline for on-site measurements showing multiple modules at once.