This thesis introduces the novel hybrid algorithm DisCOptER for globally optimal flight planning. DisCOptER (Discrete-Continuous Optimization for Enhanced Resolution) combines discrete and continuous optimization in a two-stage approach to find optimal trajectories up to arbitrary precision in finite time. In the discrete phase, a directed auxiliary graph is created in order to define a set of candidate paths that densely covers the relevant part of the trajectory space. Then, Yen's algorithm is employed to identify a set of promising candidate paths. These are used as starting points for the subsequent stage in which they are refined with a locally convergent optimal control method. The correctness, accuracy, and complexity of DisCOptER are intricately linked to the choice of the switch-over point, defined by the discretization coarseness. Only a sufficiently dense graph enables the algorithm to find a path within the convex domain surrounding the global minimizer. Initialized with such a path, the second stage rapidly converges to the optimum. Conversely, an excessively dense graph poses the risk of overly costly and redundant computations. The determination of the optimal switch-over point necessitates a profound understanding of the local behavior of the problem, the approximation properties of the graph, and the convergence characteristics of the employed optimal control method. These topics are explored extensively in this thesis. Crucially, the density of the auxiliary graph is solely dependent on the environmental conditions, yet independent of the desired solution accuracy. As a consequence, the algorithm inherits the superior asymptotic convergence properties of the optimal control stage. The practical implications of this computational efficiency are demonstrated in realistic environments, where the DisCOptER algorithm consistently delivers highly accurate globally optimal trajectories with exceptional computational efficiency. This notable improvement upon existing approaches underscores the algorithm's significance. Beyond its technical prowess, the DisCOptER algorithm stands as a valuable tool contributing to the reduction of costs and the overall enhancement of flight operations efficiency.
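As a rough illustration of the discrete stage described above, the sketch below enumerates a few cheap candidate paths on a toy auxiliary graph with Yen's algorithm as provided by networkx; the graph, its edge costs, and the function name are placeholder assumptions, and the continuous refinement stage is only indicated by a comment.

```python
# Minimal sketch: candidate path enumeration with Yen's algorithm (networkx),
# on a toy auxiliary graph; not the thesis's graph construction or cost model.
from itertools import islice
import networkx as nx

def k_candidate_paths(G, source, target, k, weight="weight"):
    """Return the k cheapest simple paths; networkx uses Yen's algorithm here."""
    return list(islice(nx.shortest_simple_paths(G, source, target, weight=weight), k))

# Toy auxiliary graph: nodes are discretized waypoints, weights are flight-cost estimates.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 2.0), ("A", "C", 2.5), ("B", "D", 2.1),
    ("C", "D", 1.8), ("B", "C", 0.4),
])
for path in k_candidate_paths(G, "A", "D", k=3):
    cost = nx.path_weight(G, path, weight="weight")
    print(path, cost)  # each path would seed the continuous optimal-control refinement
```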
Polymorphism is the property exhibited by many inorganic and organic molecules to crystallize in more than one crystal structure. There is a strong need for understanding the influencing factors on polymorphism, as it is responsible for differences in many physicochemical properties such as stability and solubility. Nearly 80 % of marketed drugs exhibit polymorphism. In this work, we took the model system of paracetamol to investigate the influence of solvent choice on its polymorphism. Different methods were developed and employed to understand the influence of small organic solvents on the crystallization of paracetamol. Non-equilibrium molecular dynamics simulations with periodic simulated annealing were used as a tool to probe the nature of precursors of the metastable intermediates occurring in the crystallization process. Using this method, it was found that the structures of the building blocks of crystals of paracetamol are governed by solvent-solute interactions. In situ Raman spectroscopy was used with a custom-made acoustic levitator to follow crystallization. This set-up is a reliable method for investigating solvent influence, attenuating heterogeneous nucleation and stabilizing other environmental factors. It was established that as a solvent, ethanol is much stronger than methanol in its effect of driving paracetamol solutions to their crystal form. The time-resolved Raman spectroscopy crystallization data was processed using a newly developed objective-function-based non-negative matrix factorization (NMF) method. Orthogonal time-lapse photography was used in conjunction with NMF to obtain unique and accurate factors that pertain to the spectra and concentrations of different moieties of paracetamol crystallization existing as latent components in the untreated data.
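For orientation, the sketch below factorizes a synthetic time-resolved spectral matrix with the standard NMF implementation from scikit-learn; it stands in for, and is not, the objective-function-based NMF variant developed in this work, and all data are random placeholders.

```python
# Minimal sketch: unmixing a (time x Raman shift) matrix into nonnegative
# concentration profiles and component spectra with plain NMF (scikit-learn).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
true_spectra = rng.random((2, 300))                                # latent component spectra
true_conc = np.abs(np.cumsum(rng.normal(size=(100, 2)), axis=0))   # concentration profiles
X = true_conc @ true_spectra + 0.01 * rng.random((100, 300))       # synthetic measurements

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
C = model.fit_transform(X)   # estimated concentration profiles over time
S = model.components_        # estimated component spectra
print(C.shape, S.shape)      # (100, 2), (2, 300)
```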
Knee osteoarthritis (KOA) is a degenerative disease that leads to pain and loss of function. It is estimated to affect over 500 million people worldwide and is one of the most common reasons for disability. KOA is usually diagnosed by radiologists or clinical experts based on anamnesis, physical examination, and medical image data. The latter is typically acquired using X-ray or magnetic resonance imaging. Since manual image reading is subjective, tedious, and time-consuming, automated methods are required for fast and objective decision support and for a better understanding of the pathogenesis of KOA.
This thesis sets a foundation towards automated computation of image-based KOA biomarkers for holistic assessment of the knee. This involves the assessment of multiple knee bones and soft tissues. An assessment of particular structures requires localization of these tissues. In order to automate a faithful localization of anatomical structures, deep learning-based methods are investigated and utilized. Additionally, convolutional neural networks (CNNs) are used for classification of medical image data, i.e., for a direct determination of the disease status and to detect anatomical structures and landmarks. The automatically computed anatomical volumes, locations, and other measurements are finally compared to values acquired by clinical experts and evaluated for clustering of KOA groups, classification of KOA severity, prediction of KOA progression, and prediction of total knee replacement.
In various experiments it is shown that CNN-based methods are suitable for accurate medical image segmentation, object detection, landmark detection, and direct classification of disease stages from the image data. Computed features related to the menisci are found to be most expressive in terms of clustering of KOA groups and prediction of future disease states, thus allowing diagnosis of current KOA conditions and prediction of future conditions.
The conclusion of this thesis is that machine learning-based, fully automated processing of medical image data shows potential for diagnosis and prediction of KOA grades. Future studies could investigate additional features in order to achieve an assessment of the whole knee or validate the findings of this work in clinical studies.
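As a minimal sketch of the direct classification setting mentioned above, the following CNN maps an image patch to one of five severity grades (e.g., the Kellgren-Lawrence scale); architecture, input size, and grade count are illustrative assumptions, not the networks used in the thesis.

```python
# Minimal sketch: a small CNN for direct KOA severity classification from image data.
import torch
import torch.nn as nn

class KOAGradeNet(nn.Module):
    def __init__(self, num_grades: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_grades)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = KOAGradeNet()
dummy = torch.randn(4, 1, 224, 224)   # batch of 4 grayscale image patches (placeholder data)
print(model(dummy).shape)             # torch.Size([4, 5]) -> one logit per severity grade
```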
Mixed-Integer Linear Programming (MILP) is a ubiquitous and practical modelling paradigm that is essential for optimising a broad range of real-world systems. The backbone of all modern MILP solvers is the branch-and-cut algorithm, which is a hybrid of the branch-and-bound and cutting planes algorithms. Cutting planes (cuts) are linear inequalities that tighten the relaxation of a MILP. While a lot of research has gone into deriving valid cuts for MILPs, less emphasis has been put on determining which cuts to select. Cuts in general are generated in rounds, and a subset of the generated cuts must be added to the relaxation. The decision on which subset of cuts to add is called cut selection. This is a crucial task since adding too many cuts makes the relaxation large and slow to optimise over. Conversely, adding too few cuts results in an insufficiently tightened relaxation, and more relaxations need to be enumerated. To further emphasise the difficulty, the effectiveness of an applied cut depends both on the other applied cuts and on the state of the MILP solver. In this thesis, we present theoretical results on the importance and difficulty of cut selection, as well as practical results that use cut selection to improve general MILP solver performance. Improving general MILP solver performance is of great importance for practitioners and has many knock-on effects. Reducing the solve time of currently solved systems can directly improve efficiency within the application area. In addition, improved performance enables larger systems to be modelled and optimised, and MILP to be used in areas where it was previously impractical due to time restrictions.
Each chapter of this thesis corresponds to a publication on cut selection, where the contributions of this thesis can naturally be divided into four components. The first two components are motivated by instance-dependent performance. In practice, for each subroutine, including cut selection, MILP solvers have adjustable parameters with hard-coded default values. It is ultimately unrealistic to expect these default values to perform well for every instance. Rather, it would be ideal if the parameters were dependent on the given instance. To show that this motivation is well founded, we first introduce a family of parametric MILP instances and cuts to showcase worst-case performance of cut selection for any fixed parameter value. We then introduce a graph neural network architecture and reinforcement learning framework for learning instance-dependent cut scoring parameters. In the following component, we formalise language for determining whether a cut is theoretically useful from a polyhedral point of view in relation to other cuts. In addition, to overcome issues of infeasible projections and dual degeneracy, we introduce analytic center based distance measures. We then construct a lightweight multi-output regression model that predicts relative solver performance of an instance for a set of distance measures. The final two components are motivated by general MILP solver improvement via cut selection. Such improvement was shown to be possible, albeit difficult to achieve, by the first half of this thesis. We relate branch-and-bound and cuts through their underlying disjunctions. Using a history of previously computed Gomory mixed-integer cuts, we reduce the solve time of SCIP by 4% over the 67% of MIPLIB 2017 instances that are affected. In the final component, we introduce new cut scoring measures and filtering methods based on information from other MILP solving processes. The new cut selection techniques reduce the solve time of SCIP by 5% over the 97% of MIPLIB 2017 instances that are affected.
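To make the notion of cut scoring and selection concrete, the sketch below combines two classical measures (efficacy and objective parallelism) into a weighted score and then filters near-parallel cuts greedily; the weights, thresholds, and measures are generic illustrations, not the tuned scoring rules developed in the thesis or the SCIP defaults.

```python
# Minimal sketch: weighted cut scoring followed by parallelism-based filtering.
import numpy as np

def efficacy(a, b, x_lp):
    """Distance by which the cut a^T x <= b cuts off the LP solution x_lp."""
    return (a @ x_lp - b) / np.linalg.norm(a)

def parallelism(a1, a2):
    """Cosine of the angle between two cut normals (1 = parallel)."""
    return abs(a1 @ a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))

def select_cuts(cuts, x_lp, c, w_eff=1.0, w_par=0.1, max_parallelism=0.9):
    """Score cuts, then greedily keep cuts that are not too parallel to chosen ones."""
    scored = sorted(
        cuts,
        key=lambda cut: w_eff * efficacy(cut[0], cut[1], x_lp)
                        + w_par * parallelism(cut[0], c),
        reverse=True,
    )
    selected = []
    for a, b in scored:
        if all(parallelism(a, a_sel) <= max_parallelism for a_sel, _ in selected):
            selected.append((a, b))
    return selected

x_lp = np.array([0.5, 1.5])                                       # fractional LP solution
c = np.array([1.0, 1.0])                                          # objective vector
cuts = [(np.array([1.0, 0.0]), 0.0), (np.array([1.0, 1.0]), 1.0), (np.array([0.9, 1.0]), 1.0)]
print(select_cuts(cuts, x_lp, c))
```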
Riesz-projection-based methods for the numerical simulation of resonance phenomena in nanophotonics
(2023)
Mixed-integer linear programming (MILP) plays a crucial role in the field of mathematical optimization and is especially relevant for practical applications due to the broad range of problems that can be modeled in that fashion. The vast majority of MILP solvers employ the LP-based branch-and-cut approach. As the name suggests, the linear programming (LP) subproblems that need to be solved therein influence their behavior and performance significantly.
This thesis explores the impact of various LP solvers as well as LP solving techniques on the constraint integer programming framework SCIP Optimization Suite. SCIP allows for comparisons between academic and open-source LP solvers like Clp and SoPlex, as well as commercially developed, high-end codes like CPLEX, Gurobi, and Xpress.
We investigate how the overall performance and stability of an MILP solver can be improved by new algorithmic enhancements like LP solution polishing and persistent scaling that we have implemented in the LP solver SoPlex. The former decreases the fractionality of LP solutions by selecting another vertex on the optimal hyperplane of the LP relaxation, exploiting degeneracy. The latter provides better numerical properties for the LP solver throughout the MILP solving process by preserving and extending the initial scaling factors, effectively also improving the overall performance of SCIP. Both enhancement techniques are activated by default in the SCIP Optimization Suite.
Additionally, we provide an analysis of numerical conditions in SCIP through the lens of the LP solver by comparing different measures and how these evolve during the different stages of the solving process. A side effect of our work on this topic was the development of TreeD: a new and convenient way of presenting the search tree interactively and animated in the three-dimensional space. This visualization technique facilitates a better understanding of the MILP solving process of SCIP.
Furthermore, this thesis presents the various algorithmic techniques like the row representation and iterative refinement that are implemented in SoPlex and that distinguish the solver from other simplex-based codes. Although it is often not as performant as its competitors, SoPlex demonstrates the ongoing research efforts in the field of linear programming with the simplex method.
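The refine-the-residual idea behind iterative refinement can be sketched on a plain linear system: solve in low precision, compute the residual in higher precision, and solve again for a correction. This is only the classical textbook variant; the LP iterative refinement in SoPlex additionally refines bounds and objective and works towards arbitrarily accurate or exact solutions.

```python
# Minimal sketch: classical iterative refinement for Ax = b (low-precision solves,
# higher-precision residuals); an analogy for, not the implementation of, LP refinement.
import numpy as np

def iterative_refinement(A, b, rounds=3):
    x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)
    for _ in range(rounds):
        r = b - A @ x                                              # residual in double precision
        dx = np.linalg.solve(A.astype(np.float32), r.astype(np.float32))
        x = x + dx.astype(np.float64)                              # corrected solution
    return x

A = np.array([[1.0, 1.0001], [1.0001, 1.0]])
b = np.array([2.0001, 2.0001])
print(iterative_refinement(A, b), np.linalg.solve(A, b))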
Aside from that, we demonstrate the rapid prototyping of algorithmic ideas and modeling approaches via PySCIPOpt, the Python interface to the SCIP Optimization Suite. This tool allows for convenient access to SCIP's internal data structures from the user-friendly Python programming language to implement custom algorithms and extensions without any prior knowledge of SCIP's programming language C. TreeD is one such example, demonstrating the use of several Python libraries on top of SCIP. PySCIPOpt also provides an intuitive modeling layer to formulate problems directly in the code without having to utilize another modeling language or framework.
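A small example of the modeling layer mentioned above: the snippet formulates and solves an arbitrary toy MILP directly through PySCIPOpt; the model itself is illustrative only.

```python
# Minimal sketch: a toy MILP built and solved with PySCIPOpt's modeling layer.
from pyscipopt import Model

m = Model("toy")
m.hideOutput()                                 # suppress solver log
x = m.addVar(vtype="I", name="x", lb=0)        # integer variable
y = m.addVar(vtype="C", name="y", lb=0)        # continuous variable
m.addCons(2 * x + y <= 10)
m.addCons(x + 3 * y <= 12)
m.setObjective(3 * x + 2 * y, sense="maximize")
m.optimize()
print(m.getObjVal(), m.getVal(x), m.getVal(y))
```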
All contributions presented in this thesis are readily accessible in source code in the SCIP Optimization Suite or as separate projects on the public code-sharing platform GitHub.
Since the beginning of this millennium, the advent of high-throughput methods in numerous fields of the life sciences led to a shift in paradigms. A broad variety of technologies emerged that allow comprehensive quantification of molecules involved in biological processes. Simultaneously, a major increase in data volume has been recorded with these techniques through enhanced instrumentation and other technical advances. By supplying computational methods that automatically process raw data to obtain biological information, the field of bioinformatics plays an increasingly important role in the analysis of the ever-growing mass of data. Computational mass spectrometry, in particular, is a bioinformatics field of research which provides means to gather, analyze, and visualize data from high-throughput mass spectrometric experiments. For the study of the entirety of proteins in a cell or an environmental sample, even current techniques reach limitations that need to be circumvented by simplifying the samples subjected to the mass spectrometer. These pre-digested (so-called bottom-up) proteomics experiments then pose an even bigger computational burden during analysis since complex ambiguities need to be resolved during protein inference, grouping and quantification. In this thesis, we present several developments in the pursuit of our goal to provide means for a fully automated analysis of complex and large-scale bottom-up proteomics experiments. Firstly, due to prohibitive computational complexities in state-of-the-art Bayesian protein inference techniques, a refined, more stable technique for performing inference on sums of random variables was developed to enable a variation of standard Bayesian inference for the problem. Secondly, we developed a complete, automated analysis workflow, implemented in nextflow and part of a set of standardized, well-tested, and community-maintained workflows by the nf-core collective. Our workflow runs on large-scale data with complex experimental designs and allows a one-command analysis of local and publicly available data sets with state-of-the-art accuracy on various high-performance computing environments or the cloud.
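For intuition only, the underlying operation of inference on sums of independent discrete random variables is a convolution of their probability mass functions, as in the toy snippet below; the thesis's refined technique addresses the stability and scale of this computation, which the snippet does not attempt to reproduce.

```python
# Minimal sketch: distribution of a sum of two independent discrete random variables
# via convolution of their PMFs (toy numbers, not proteomics data).
import numpy as np

pmf_a = np.array([0.2, 0.5, 0.3])      # P(A = 0), P(A = 1), P(A = 2)
pmf_b = np.array([0.6, 0.4])           # P(B = 0), P(B = 1)
pmf_sum = np.convolve(pmf_a, pmf_b)    # P(A + B = 0), ..., P(A + B = 3)
print(pmf_sum, pmf_sum.sum())          # probabilities sum to 1
```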
Data analysis has become fundamental to our society and comes in multiple facets and approaches. Nevertheless, in research and applications, the focus has primarily been on data from Euclidean vector spaces. Consequently, the majority of methods that are applied today are not suited for more general data types. Driven by needs from fields like image processing, (medical) shape analysis, and network analysis, more and more attention has recently been given to data from non-Euclidean spaces---particularly (curved) manifolds. This has led to the field of geometric data analysis, whose methods explicitly take the structure (for example, the topology and geometry) of the underlying space into account.
This thesis contributes to the methodology of geometric data analysis by generalizing several fundamental notions from multivariate statistics to manifolds. We thereby focus on two different viewpoints.
First, we use Riemannian structures to derive a novel regression scheme for general manifolds that relies on splines of generalized Bézier curves. It can accurately model non-geodesic relationships, for example, time-dependent trends with saturation effects or cyclic trends. Since Bézier curves can be evaluated with the constructive de Casteljau algorithm, working with data from manifolds of high dimensions (for example, a hundred thousand or more) is feasible. Relying on the regression, we further develop a hierarchical statistical model for an adequate analysis of longitudinal data in manifolds, and a method to control for confounding variables.
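For reference, the constructive de Casteljau algorithm in its Euclidean form is shown below; on a manifold, the straight-line interpolations between points are replaced by points along geodesics, which the sketch does not implement.

```python
# Minimal sketch: de Casteljau evaluation of a Bezier curve (Euclidean case).
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated pairwise interpolation."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ctrl = [[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 0.0]]  # cubic Bezier in the plane
print(de_casteljau(ctrl, 0.5))
```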
We secondly focus on data that is not only manifold- but even Lie group-valued, which is frequently the case in applications. We can only achieve this by endowing the group with an affine connection structure that is generally not Riemannian. Utilizing it, we derive generalizations of several well-known dissimilarity measures between data distributions that can be used for various tasks, including hypothesis testing. Invariance under data translations is proven, and a connection to continuous distributions is given for one measure.
A further central contribution of this thesis is that it shows use cases for all notions in real-world applications, particularly in problems from shape analysis in medical imaging and archaeology. We can replicate or further quantify several known findings for shape changes of the femur and the right hippocampus under osteoarthritis and Alzheimer's, respectively. Furthermore, in an archaeological application, we obtain new insights into the construction principles of ancient sundials. Last but not least, we use the geometric structure underlying human brain connectomes to predict cognitive scores. Utilizing a sample selection procedure, we obtain state-of-the-art results.
This thesis considers the transient gas network control optimization problem for on-shore pipeline-based transmission networks with numerous gas routing options.
As input, the problem is given the network's topology, its initial state, and future demands at the boundaries of the network, which prescribe the gas flow exchange and potentially the pressure values.
The task is to find a set of future control measures for all the active, i.e., controllable, elements in the network that minimizes a combination of different penalty functions.
The problem is examined in the context of a decision support tool for gas network dispatchers.
This results in detailed models featuring a diverse set of constraints, large and challenging real-world instances, and demanding time limit requirements.
All these factors further complicate the problem, which is already difficult to solve in theory due to the inherent combination of non-linear and combinatorial aspects.
Our contributions concern different steps of the process of solving the problem.
Regarding the model formulation, we investigate the validity of two common approximations of the gas flow description in transport pipes: neglecting the inertia term and assuming a friction term that linearly depends on the gas flow and the pressure.
For both, we examine if they can be applied under real-world conditions by evaluating a large amount of historical state data of the network of our project partner, the gas network operator Open Grid Europe.
While we can confirm that it is reasonable to ignore the influence of the inertia term, the friction term linearization leads to significant errors and, as a consequence, cannot be used for describing the general gas flow behavior in transport pipes.
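To illustrate why such a linearization can go wrong, the toy computation below compares the usual quadratic friction term, proportional to q|q|, with a first-order linearization around an operating point; the coefficient and flow range are illustrative placeholders, not the pipe parameters or the specific linearization examined in the thesis.

```python
# Minimal sketch: relative error of a friction term linearized around q0 versus
# the quadratic q|q| model, over a range of flows (toy numbers).
import numpy as np

lam = 0.01                       # illustrative friction coefficient
q = np.linspace(50.0, 400.0, 8)  # flow range
q0 = 200.0                       # linearization point

friction_quadratic = lam * q * np.abs(q)
friction_linear = lam * (2 * abs(q0) * q - q0 * abs(q0))   # first-order Taylor at q0
rel_error = np.abs(friction_linear - friction_quadratic) / friction_quadratic
for qi, err in zip(q, rel_error):
    print(f"q = {qi:6.1f}  relative error = {err:7.1%}")   # error grows away from q0
```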
As another topic of this thesis, we introduce the target value concept as a more realistic approach to express control actions of dispatchers regarding regulators and compressor stations.
Here, we derive the mechanisms defined for target values based on the gas flow principles in pipes and develop a mixed-integer programming model capturing their behavior.
The accuracy of this model is demonstrated in comparison to a target-value-based industry-standard simulator.
Furthermore, we present two heuristics for the transient gas network control optimization problem featuring target values that are based on approximative models for the target-value-based control and determine the final decisions in a post-processing step.
To compare the performance of the two heuristics with the approach of directly solving the corresponding model, we evaluate them on a set of artificially created test instances.
Finally, we develop problem-specific algorithms for two variants of the described problem.
One considers the control optimization for a single network station, which represents a local operation site featuring a large number of active elements.
The used transient model is very detailed and includes a sophisticated representation of the compressor stations.
Exploiting the short pipe lengths within the station, the corresponding algorithm finds valid solutions by solving a series of stationary model variants as well as by applying a transient rolling horizon approach.
As the second variant, we consider the problem on the entire network but assume an approximative model representing the control capabilities of network stations.
Aside from a new description of the compression capabilities, we introduce an algorithm that uses a combination of sequential mixed-integer programming, two heuristics based on reduced time horizons, and a specialized dynamic branch-and-bound node limit to determine promising values for the binary variables of the model.
Complete solutions for the problem are obtained by fixing the binary values and solving the remaining non-linear program.
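The overall control flow of this fix-and-resolve step can be sketched as follows; the two functions are hypothetical stubs standing in for the sequential mixed-integer programming stage and the non-linear resolve, not the thesis's implementation.

```python
# Schematic sketch only: fix binary decisions from an approximate MIP stage,
# then resolve the remaining non-linear program (both stages are stubs).
def solve_milp_approximation(instance):
    """Stub: would return values for the binary decisions of the approximate model."""
    return {"valve_open[1]": 1, "compressor_active[1]": 0}

def solve_nlp_with_fixings(instance, binary_fixings):
    """Stub: would solve the remaining non-linear program with the binaries fixed."""
    return {"objective": 42.0, "fixings": binary_fixings}

def solve_transient_control(instance):
    binary_fixings = solve_milp_approximation(instance)        # stage 1: discrete decisions
    return solve_nlp_with_fixings(instance, binary_fixings)    # stage 2: continuous resolve

print(solve_transient_control(instance={"name": "toy"}))
```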
Both algorithms are investigated in extensive empirical studies based on real-world instances of the corresponding model variants.
Our faces and facial expressions are an important means of communication and social interaction. One goal of the behavioral sciences is to better understand how the features of the faces that we look at influence our behavior. These include static features like facial proportions or the shape and color of certain parts of a face which primarily constitute facial identity, as well as dynamic movements resulting from the activation of the mimic musculature. Experimental psychology provides an empirical approach to this endeavor. In experiments, participants are typically exposed to images or videos of realistic faces with specifically controlled features. By analysis of the reactions to such stimuli, conclusions can be drawn about the influence of facial features on the participants’ behavior.
Psychologists today mostly generate face stimuli with the help of digital tools. Image editing with Photoshop is highly flexible, but also time-consuming and subjective. Using tools like Psychomorph or Fantamorph is easier and more objective, but does not allow specific control over facial features. In contrast, stimulus generation with 3D Morphable Face Models (3DMMs) offers a better balance between objectivity, ease of use, and flexibility. 3DMMs are statistical models which have been determined from 3D scans of real people’s faces and facial expressions. After these training scans have been brought into correspondence, methods like principal component analysis (PCA) can be used to determine the major modes of variation of facial shape and texture in the data. Such modes typically vary the overall facial proportions, expressions, or skin color. They can be individually controlled and flexibly combined to generate new faces and facial expressions. The plausibility of the generated faces can be ensured by having the mode combinations follow the multivariate distribution of the training data.
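The PCA construction described above can be sketched in a few lines: with registered meshes stacked as rows, the principal modes and their standard deviations follow from an SVD, and new faces are generated as the mean plus a weighted combination of modes. The "scans" below are random stand-ins for meshes already in dense correspondence.

```python
# Minimal sketch: PCA-style 3DMM construction and sampling from random placeholder data.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_vertices = 50, 1000
X = rng.normal(size=(n_scans, 3 * n_vertices))     # each row: flattened (x, y, z) vertices

mean_face = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
modes = Vt                                         # rows: modes of shape variation
stddevs = s / np.sqrt(n_scans - 1)                 # std. dev. of the data along each mode

# A new plausible face: mean plus a combination of the first modes,
# with coefficients expressed in standard deviations.
coeffs = np.array([1.5, -0.5, 0.8])
new_face = mean_face + (coeffs * stddevs[:3]) @ modes[:3]
print(new_face.shape)                              # (3000,)
```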
3DMMs have been mostly used by psychologists for the generation of stimulus images of faces with neutral expression. Static and dynamic stimuli of facial expressions are also of great interest, but generation with 3DMMs is less common. A problem is that the majority of current 3DMMs can only generate facial movements according to the six prototypic expressions of anger, disgust, fear, happiness, sadness, and surprise. More diverse or subtle expressions are often impossible. Among other reasons, this is due to the difficulty in establishing accurate correspondence in the training data. Further, the modes of most 3DMMs were created by means of PCA. These modes often lack interpretability, fail to generate facial details, and rarely provide psychologists with specific control over identity or expression features. Some 3DMMs also generate subtle artifacts that might lead to undesired effects during face perception. They are also less realistic than faces which were designed by artistic experts for recent computer games and animated movies. Last but not least, current 3DMMs have probably not yet been used for interactive experiments in virtual reality (VR) for technical reasons.
Although they provide many advantages also beyond the generation of static or dynamic stimuli, the limitations of current 3DMMs have so far prevented a widespread usage in experimental psychology. The goal of this dissertation is to foster the creation and usage of 3DMMs in this context. To this end, we make three major contributions.
First, we describe a matching method that establishes correspondence for 3D face scans with a very high accuracy. Unlike the most commonly used methods, it transforms the facial features into a 2D intermediate representation so that they can be aligned to a reference using image registration. We perform experiments with a large database of 3D scans of faces and facial expressions showing that our method outperforms previous approaches.
Second, the 3D scans which were previously brought into correspondence are used for the creation of a 3DMM whose resolution is an order of magnitude higher than that of most existing models. We learn a variety of meaningful modes that, e.g., vary features only in specific regions of the face, or that are related to demographic factors such as ethnicity and age. Further, modes of local facial movements are established that can be flexibly combined into a large variety of expressions. We evaluate the quality of the newly created 3DMM in two experiments. Our results show its advantages over previous models, especially the higher degree of realism of dynamic stimuli of facial expressions which were created with our model.
Third, we demonstrate that 3DMMs can be used not only for the generation of stimuli. We develop two experimental methods that are readily applicable in experimental psychology. Initially, we create 3D avatar faces with our 3DMM that can be used directly in VR. They are used in a new open source framework for virtual mirror experiments on self-face perception. A study is conducted which demonstrates the advantages of the framework over previous methods. Furthermore, our 3DMM is used to create a method for improved control of facial asymmetry in existing stimulus photographs. We show that the method accounts for different dimensions of facial asymmetry and is less sensitive than previous approaches to extrinsic factors like the posture of the head. The different methods are evaluated in a study investigating the influence of facial asymmetry on ratings of attractiveness, femininity, and masculinity. The results indicate the benefits and validity of our method.
Surgical interventions are becoming increasingly complex thanks to modern assistance systems (imaging, robotics, etc.). Minimally invasive surgery in particular places high demands on surgeons due to added surgical complexity and information overload. Therefore, there is a growing need to develop context-aware systems that recognize the current surgical situation in order to derive and present the relevant information to the surgical staff for assistance. Current approaches for deriving contextual cues either utilize specialized hardware that is disruptive to the surgical workflow, or utilize vision-based approaches that require valuable time of surgeons, especially for manual annotations.
The main objective of this cumulative dissertation is to improve the existing approaches for three important sub-problems of vision-based context-aware systems, namely surgical phase recognition, surgical instrument recognition and surgical instrument segmentation, while tackling the vision and manual annotation challenges related to these problems.
This dissertation demonstrates that vision-based approaches for the three named clinical sub-problems of context-aware systems can be developed in an annotation-scarce setting by employing: domain-specific, deep learning based transfer learning techniques for the surgical instrument and phase recognition tasks; and deep learning based simulation-to-real unsupervised domain adaptation techniques for the surgical instrument segmentation task. The efficacy and real-time performance of the developed approaches have been evaluated on publicly available datasets containing real surgical videos (laparoscopic procedures) that were acquired in an uncontrolled surgical environment.
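As a generic illustration of the transfer-learning setup (not the domain-specific techniques developed in the dissertation), the sketch below adapts a pretrained ImageNet backbone to a surgical instrument presence task by freezing the backbone and training a new classification head; the instrument count and data are placeholder assumptions.

```python
# Minimal sketch: transfer learning by fine-tuning only a new head on a frozen backbone.
import torch
import torch.nn as nn
from torchvision import models

num_instruments = 7                                            # assumed label count
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                               # freeze pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_instruments)    # new trainable head

frames = torch.randn(2, 3, 224, 224)                           # dummy laparoscopic frames
probs = torch.sigmoid(model(frames))                           # per-instrument presence scores
print(probs.shape)                                             # torch.Size([2, 7])
```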
These proposed approaches advance the state-of-the-art for the aforementioned research problems of context-aware systems in the OR and can potentially be utilized for real-time notification of the surgical phase, surgical instrument usage and image-based localization of surgical instruments.
The fight against climate change makes extreme but inevitable changes in the energy sector necessary. These in turn lead to novel and complex challenges for the transmission system operators (TSOs) of gas transport networks. In this thesis, we consider four different planning problems emerging from real-world operations and present mathematical programming models and solution approaches for all of them.
Due to regulatory requirements and side effects of renewable energy production, controlling today's gas networks with their involved topologies is becoming increasingly difficult. Based on the network station modeling concept for approximating the technical capabilities of complex subnetworks, e.g., compressor stations, we introduce a tri-level MIP model to determine important global control decisions. Its goal is to avoid changes in the network elements' settings while deviations from future inflow pressures as well as supplies and demands are minimized. A sequential linear programming inspired post-processing routine is run to derive physically accurate solutions w.r.t. the transient gas flow in pipelines. Computational experiments based on real-world data show that meaningful solutions are quickly and reliably determined. Therefore, the algorithmic approach is used within KOMPASS, a decision support system for the transient network control that we developed together with the Open Grid Europe GmbH (OGE), one of Europe's largest natural gas TSOs.
Anticipating future use cases, we adapt the aforementioned algorithmic approach for hydrogen transport. We investigate whether the natural gas infrastructure can be repurposed and how the network control changes when energy-equivalent amounts of hydrogen are transported. Besides proving the need for purpose-built compressors, we observe that, due to the reduced linepack, the network control becomes more dynamic, compression energy increases by 440% on average, and stricter regulatory rules regarding the balancing of supply and demand become necessary.
Extreme load flows expose the technical limits of gas networks and are therefore of great importance to the TSOs. In this context, we introduce the Maximum Transportation Problem and the Maximum Potential Transport Moment Problem to determine severe transport scenarios. Both can be modeled as linear bilevel programs where the leader selects supplies and demands, maximizing the follower's transport effort. To solve them, we identify solution-equivalent instances with acyclic networks, provide variable bounds regarding their KKT reformulations, apply the big-M technique, and solve the resulting MIPs. A case study shows that the obtained scenarios exceed the maximum severity values of a provided test set by at least 23%.
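For reference, the generic single-level reformulation used in this kind of approach replaces the follower's linear program by its KKT conditions and linearizes the complementarity constraints with big-M constants and binary variables; the notation below is generic (leader variables $x$, follower variables $y$ with objective $d_f$, big-M constant $M$) and does not reproduce the thesis's concrete models or bounds.

```latex
\begin{align*}
  \max_{x \in X,\; y,\; \lambda,\; u,\; v} \quad & c^\top x + d^\top y \\
  \text{s.t.} \quad & B y \le b - A x, \quad y \ge 0, && \text{(follower primal feasibility)}\\
  & B^\top \lambda \ge d_f, \quad \lambda \ge 0, && \text{(follower dual feasibility)}\\
  & b - A x - B y \le M (\mathbf{1} - u), \quad \lambda \le M u, && u \in \{0,1\}^m,\\
  & B^\top \lambda - d_f \le M (\mathbf{1} - v), \quad y \le M v, && v \in \{0,1\}^n.
\end{align*}
```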
OGE's transmission system is 11,540 km long. Monitoring it is crucial for safe operations. To this end, we discuss the idea of using uncrewed aerial vehicles and introduce the Length-Constrained Cycle Partition Problem to optimize their routing. Its goal is to find a smallest cycle partition satisfying vertex-induced length requirements. Besides a greedy-style heuristic, we propose two MIP models. Combining them with symmetry-breaking constraints as well as valid inequalities and lower bounds from conflict hypergraphs yields a highly performant solution algorithm for this class of problems.
Simulation of Piecewise Smooth Differential Algebraic Equations with Application to Gas Networks
(2022)
The electric conductivity of cardiac tissue determines excitation propagation and is vital for quantifying ischemia and scar tissue and building personalized models. As scar tissue is generally characterized by different conduction of electrical excitation, we aim to estimate conductivity-related parameters in mathematical excitation models from endocardial mapping data, particularly the anisotropic conductivity tensor in the monodomain equation, which describes the cardiac excitation. Yet, estimating the distributed and anisotropic conductivity tensors reliably and efficiently from endocardial mapping data or electrocardiograms is a challenging inverse problem due to the computational complexity of the monodomain equation: many expensive high-resolution computations for the monodomain equation on very fine space and time discretizations are involved.
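For context, the monodomain equation referred to above is commonly written in the following standard form; the notation is the usual one and may differ from the thesis.

```latex
\begin{align*}
  \chi \left( C_m \frac{\partial v}{\partial t} + I_{\mathrm{ion}}(v, w) \right)
    &= \nabla \cdot \left( \sigma \nabla v \right) + I_{\mathrm{stim}}, \\
  \frac{\partial w}{\partial t} &= g(v, w),
\end{align*}
```

where $v$ is the transmembrane voltage, $w$ collects the state variables of the ionic model, $\sigma$ is the (anisotropic) conductivity tensor to be estimated, $\chi$ the surface-to-volume ratio, and $C_m$ the membrane capacitance.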
Thus, we aim at building an efficient multilevel method for accelerating the estimation procedure combining electrophysiology models of different complexity, which uses a computationally cheap eikonal model in addition to the more accurate monodomain model. Distributed parameter estimation, well-known as an ill-posed inverse problem, can be performed by minimizing the misfit between simulated and measured electrical activity on the endocardial surface subject to the monodomain model and some regularization, leading to a partial differential equation constrained optimization problem. We formulate this optimization problem, including scar tissue modeling and different regularizations, and design an efficient iterative solver. To this aim, we consider monodomain grid hierarchies, monodomain-eikonal model hierarchies, and the combination of both hierarchies in a recursive multilevel trust-region (RMTR) method.
On the one hand, both the trust region method's estimation quality and efficiency, independent of the data, are investigated from several numerical examples. Endocardial mapping data of realistic density appears to be sufficient to provide quantitatively reasonable estimates of the location, size, and shape of scars close to the endocardial surface. In several situations, scar reconstructions based on eikonal and monodomain models differ significantly, suggesting the use of the more involved monodomain model for this purpose. Moreover, eikonal models can accelerate the computations considerably, enabling the use of complex electrophysiology models for estimating myocardial scars from endocardial mapping data. In many situations, eikonal models approximate monodomain models well but are orders of magnitude faster to solve. Thus, we can utilize them to provide an RMTR acceleration with negligible overhead per iteration, resulting in a practical approach to estimating myocardial scars from endocardial mapping data. In addition, the multilevel solver is faster than a comparable single-level solver. On the other hand, we investigate different optimization approaches based on adjoint gradient computation for computing a maximum a posteriori estimate: steepest descent, limited memory BFGS, and recursive multilevel trust region methods using mesh hierarchies or heterogeneous model hierarchies. We compare overall performance, asymptotic convergence rate, and pre-asymptotic progress on selected examples in order to assess the benefit of our multifidelity acceleration.
Interpretable Deep Learning Approaches for Biomarker Detection from High-Dimensional Biomedical Data
(2022)
In this work, we address the challenge of developing statistical shape models that account for the non-Euclidean nature inherent to (anatomical) shape variation and at the same time offer fast, numerically robust processing and as much invariance as possible regarding translation and rotation, i.e. Euclidean motion.
To this end, we formulate a continuous and physically motivated notion of shape space based on deformation gradients. We follow two different tracks, endowing this differential representation with a Riemannian structure to establish a statistical shape model. (1) We derive a model based on differential coordinates as elements in GL(3)+. To this end, we adapt the notion of bi-invariant means employing an affine connection structure on GL(3)+. Furthermore, we perform second-order statistics based on a family of Riemannian metrics providing as much invariance as possible, viz. GL(3)+-left-invariance and O(3)-right-invariance. (2) We endow the differential coordinates with a non-Euclidean structure that stems from a product Lie group of stretches and rotations. This structure admits a bi-invariant metric and thus allows for a consistent analysis via manifold-valued Riemannian statistics. This work further presents a novel shape representation based on discrete fundamental forms that is naturally invariant under Euclidean motion, namely the fundamental coordinates. We endow this representation with a Lie group structure that admits bi-invariant metrics and therefore allows for consistent analysis using manifold-valued statistics based on the Riemannian framework. Furthermore, we derive a simple, efficient, robust, yet accurate (i.e. without resorting to model approximations) solver for the inverse problem that allows for interactive applications. Beyond statistical shape modeling, the proposed framework is amenable to surface processing such as quasi-isometric flattening. Additionally, the last part of the thesis aims at shape-based, continuous disease stratification to provide means that objectify disease assessment over the current clinical practice of ordinal grading systems. Therefore, we derive the geodesic B-score, a generalization of the Euclidean B-score, in order to assess knee osteoarthritis. In this context, we present a Newton-type fixed point iteration for projection onto geodesics in shape space. On the application side, we show that the derived geodesic B-score features, in comparison to its Euclidean counterpart, improved predictive performance in assessing the risk of total knee replacement surgery.
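The notion of a bi-invariant (group) mean used above can be illustrated by the standard fixed-point iteration m <- m · exp(mean_i log(m^{-1} x_i)); the sketch below runs it for a few rotation matrices, which only illustrates the concept and is not the GL(3)+ construction of the thesis.

```python
# Minimal sketch: group (bi-invariant) mean of rotation matrices via fixed-point iteration.
import numpy as np
from scipy.linalg import expm, logm
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
samples = [Rotation.from_rotvec(0.2 * rng.normal(size=3)).as_matrix() for _ in range(10)]

m = np.eye(3)
for _ in range(20):
    logs = [logm(np.linalg.inv(m) @ x).real for x in samples]   # group logs w.r.t. current mean
    m = m @ expm(sum(logs) / len(logs))                         # move mean by the average log

print(m)
```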
The interesting dynamical regimes in agent-based models (ABMs) of social dynamics are the transient dynamics leading to metastable or absorbing states, and the transition paths between metastable states, possibly caused by external influences. In this thesis, we are particularly interested in the pathways of rare and critical transitions such as the tipping of the public opinion in a population or the forming of social movements. For a detailed quantitative analysis of these transition paths, we consider the agent-based models as Markov chains and employ Transition Path Theory. Since ABMs are usually not considered in stationarity and are possibly even forced, we generalize Transition Path Theory to time-dependent dynamics, for example on finite-time intervals or with periodically varying transition probabilities. We also specifically consider the case of dynamics with absorbing states and show how the transitions prior to absorption can be studied. These generalizations can also be useful in other application domains, such as for studying tipping in climate models or transitions in molecular models with external stimuli. Another obstacle when analysing the dynamics of agent-based models is the large number of agents, resulting in a high-dimensional state space for the model. However, the emergent dynamics of the ABM usually has significantly fewer degrees of freedom and many symmetries, enabling a model reduction. Using the example of two stationary ABMs, we demonstrate how a long model simulation can be employed to find a lower-dimensional parametrization of the state space using a manifold learning algorithm called Diffusion Maps. In the considered models, agents adapt their binary behaviour to the local neighbourhood. When the interaction network consists of several densely connected communities, the dynamics result in a largely coherent behaviour in each community. The low-dimensional structure of the state space is therefore a hypercube. The corners represent metastable states with coherent agent behaviour in each group and the edges correspond to transition paths where agents in a community change their behaviour through a chain reaction. Finally, we can apply Transition Path Theory to the effective dynamics in the reduced space to reveal, for example, the dominant transition paths or the agents that are most indicative of an impending tipping event.
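As a concrete illustration of the basic Transition Path Theory quantity in the stationary, time-homogeneous case, the sketch below computes the forward committor of a small Markov chain, i.e., the probability of reaching the target set before the source set from each state; the chain is a toy example unrelated to the ABMs studied in the thesis.

```python
# Minimal sketch: forward committor of a toy Markov chain (stationary TPT setting).
import numpy as np

P = np.array([
    [0.8, 0.2, 0.0, 0.0],
    [0.1, 0.7, 0.2, 0.0],
    [0.0, 0.2, 0.7, 0.1],
    [0.0, 0.0, 0.2, 0.8],
])
A, B = [0], [3]                                    # source and target sets
C = [i for i in range(len(P)) if i not in A + B]   # remaining (transition) states

# Committor: q = 0 on A, q = 1 on B, and q_i = sum_j P_ij q_j for i in C.
lhs = np.eye(len(C)) - P[np.ix_(C, C)]
rhs = P[np.ix_(C, B)].sum(axis=1)
q = np.zeros(len(P))
q[B] = 1.0
q[C] = np.linalg.solve(lhs, rhs)
print(q)   # probability of reaching B before A, per state
```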