Solving PDEs on unstructured grids is a cornerstone of engineering and scientific computing. Heterogeneous parallel platforms, including CPUs, GPUs, and FPGAs, enable energy-efficient and computationally demanding simulations.
In this article, we introduce the HPM C++-embedded DSL that bridges the abstraction gap between the mathematical formulation of mesh-based algorithms for PDE problems on the one hand and an increasing number of heterogeneous platforms with their different programming models on the other hand.
Thus, the HPM DSL aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HPM DSL, and demonstrate its usage with three examples. The mapping of the abstract algorithmic description onto parallel hardware, including distributed memory compute clusters, is presented.
A code generator and a matching back end allow the acceleration of HPM code with GPUs. Finally, the achievable performance and scalability are demonstrated for different example problems.
Solving partial differential equations on unstructured grids is a cornerstone of engineering and scientific computing. Nowadays, heterogeneous parallel platforms with CPUs, GPUs, and FPGAs enable energy-efficient and computationally demanding simulations. We developed the HighPerMeshes C++-embedded Domain-Specific Language (DSL) for bridging the abstraction gap between the mathematical and algorithmic formulation of mesh-based algorithms for PDE problems on the one hand and an increasing number of heterogeneous platforms with their different parallel programming and runtime models on the other hand. Thus, the HighPerMeshes DSL aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HighPerMeshes DSL, and demonstrate its usage with three examples: a Poisson and a monodomain problem, respectively, solved by the continuous finite element method, and the discontinuous Galerkin method for Maxwell's equations. The mapping of the abstract algorithmic description onto parallel hardware, including distributed memory compute clusters, is presented. Finally, the achievable performance and scalability are demonstrated for a typical example problem on a multi-core CPU cluster.
Geometric predicates are at the core of many algorithms, such as the construction of Delaunay triangulations, mesh processing and spatial relation tests.
These algorithms have applications in scientific computing, geographic information systems and computer-aided design.
With floating-point arithmetic, these geometric predicates can incur round-off errors that may lead to incorrect results and inconsistencies, causing computations to fail.
This issue has been addressed using a combination of exact arithmetic for robustness and floating-point filters to mitigate the computational cost of exact computations.
The implementation of exact computations and floating-point filters can be a difficult task, and code generation tools have been proposed to address this.
We present a new C++ meta-programming framework for the generation of fast, robust implementations of arbitrary geometric predicates based on polynomial expressions.
We combine and extend different approaches to filtering, branch reduction, and overflow avoidance that have previously been proposed.
We show examples of how this approach produces correct results for data sets that could lead to incorrect predicate results with naive implementations.
Our benchmark results demonstrate that our implementation surpasses state-of-the-art implementations.
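To make the filtering idea above concrete, here is a minimal sketch of a statically filtered 2D orientation predicate. The filter constant is the standard one for this determinant in double precision; the long double fallback merely stands in for a true exact-arithmetic stage, and none of this is the generated code of the framework described above.

```cpp
#include <cmath>
#include <iostream>

// Sign of the 2D orientation determinant for points a, b, c.
// Stage 1: fast floating-point evaluation with a forward error bound.
// Stage 2 (fallback): higher-precision re-evaluation, standing in for
// a truly exact stage as used in robust predicate libraries.
int orient2d(const double* a, const double* b, const double* c) {
    double detL = (b[0] - a[0]) * (c[1] - a[1]);
    double detR = (b[1] - a[1]) * (c[0] - a[0]);
    double det  = detL - detR;

    // Static filter: if |det| exceeds a bound on the rounding error,
    // the sign computed in double precision is provably correct.
    double errBound = 3.3306690738754716e-16 * (std::fabs(detL) + std::fabs(detR));
    if (det >  errBound) return  1;
    if (det < -errBound) return -1;

    // Filter failed: recompute with extended precision (illustrative only;
    // a robust implementation would switch to exact arithmetic here).
    long double dl = (static_cast<long double>(b[0]) - a[0]) * (static_cast<long double>(c[1]) - a[1])
                   - (static_cast<long double>(b[1]) - a[1]) * (static_cast<long double>(c[0]) - a[0]);
    return (dl > 0) - (dl < 0);
}

int main() {
    double a[] = {0.0, 0.0}, b[] = {1.0, 1.0}, c[] = {0.5, 0.5000000000000001};
    std::cout << orient2d(a, b, c) << "\n"; // near-degenerate input triggers the fallback
}
```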
Flight planning, the computation of optimal routes in view of flight time and fuel consumption under given weather conditions, is traditionally done by finding globally shortest paths in a predefined airway network. Free flight trajectories, not restricted to a network, have the potential to reduce the costs significantly, and can be computed using locally convergent continuous optimal control methods.
Hybrid methods that start with a discrete global search and refine with a fast continuous local optimization combine the best properties of both approaches, but rely on a good switchover, which requires error estimates for discrete paths relative to continuous trajectories.
Based on vertex density and local complete connectivity, we derive localized and a priori bounds for the flight time of discrete paths relative to the optimal continuous trajectory, and illustrate their properties on a set of benchmark problems. It turns out that localization improves the error bound by four orders of magnitude, but still leaves ample opportunities for tighter bounds using a posteriori error estimators.
We propose a hybrid discrete-continuous algorithm for flight planning in free flight airspaces. In a first step, our DisCOptER method (discrete-continuous optimization for enhanced resolution) computes a globally optimal approximate flight path on a discretization of the problem using the A* method. This route initializes a Newton method that converges rapidly to the smooth optimum in a second step. The correctness, accuracy, and complexity of the method are governed by the choice of the crossover point that determines the coarseness of the discretization. We analyze the optimal choice of the crossover point and demonstrate the asymptotic superiority of DisCOptER over a purely discrete approach.
We present an efficient algorithm that finds a globally optimal solution to the 2D Free Flight Trajectory Optimization Problem (aka Zermelo Navigation Problem) up to arbitrary precision in finite time. The algorithm combines a discrete and a continuous optimization phase. In the discrete phase, a set of candidate paths that densely covers the trajectory space is created on a directed auxiliary graph. Then Yen's algorithm provides a promising set of discrete candidate paths which subsequently undergo a locally convergent refinement stage. Provided that the auxiliary graph is sufficiently dense, the method finds a path that lies within the convex domain around the global minimizer. From this starting point, the second stage will converge rapidly to the optimum. The density of the auxiliary graph depends solely on the wind field, and not on the accuracy of the solution, such that the method inherits the superior asymptotic convergence properties of the optimal control stage.
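For orientation, the free flight problem in the abstracts above is commonly stated as a Zermelo-type optimal control problem. The following is the standard textbook formulation with constant airspeed, not a quotation from the papers:

\[
\min_{x,\,T} \; T \qquad \text{s.t.} \quad \dot x(t) = \bar v\, u(t) + w(x(t)), \quad |u(t)| = 1, \quad x(0) = x_O, \quad x(T) = x_D,
\]

where $x$ denotes the aircraft position, $\bar v$ the constant airspeed, $w$ the wind field, and $u$ the unit heading control.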
Globally optimal free flight trajectory optimization can be achieved with a combination of discrete and continuous optimization. A key requirement is that Newton's method for continuous optimization converges in a sufficiently large neighborhood around a minimizer. We show in this paper that, under certain assumptions, this is the case.
Convergence Properties of Newton's Method for Globally Optimal Free Flight Trajectory Optimization
(2023)
The algorithmic efficiency of Newton-based methods for Free Flight Trajectory Optimization is heavily influenced by the size of the domain of convergence. We provide numerical evidence that the convergence radius is much larger in practice than what the theoretical worst case bounds suggest. The algorithm can be further improved by a convergence-enhancing domain decomposition.
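To make the role of the convergence domain concrete: in the affine covariant Newton theory these papers build on, a worst-case result of the following type holds (stated here in simplified form, under an affine covariant Lipschitz condition with constant $\omega$):

\[
\|F'(x)^{-1}\bigl(F'(y) - F'(x)\bigr)(y - x)\| \le \omega\, \|y - x\|^2
\quad \Longrightarrow \quad
\|x_{k+1} - x^*\| \le \tfrac{\omega}{2}\, \|x_k - x^*\|^2,
\]

so convergence is guaranteed only for starting points with $\|x_0 - x^*\| < 2/\omega$; the numerical evidence referred to above indicates that the actual domain of convergence is much larger.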
CINDy: Conditional gradient-based Identification of Non-linear Dynamics – Noise-robust recovery
(2021)
Governing equations are essential to the study of nonlinear dynamics, often enabling the prediction of previously unseen behaviors as well as the inclusion into control strategies. The discovery of governing equations from data thus has the potential to transform data-rich fields where well-established dynamical models remain unknown. This work contributes to the recent trend in data-driven sparse identification of nonlinear dynamics of finding the best sparse fit to observational data in a large library of potential nonlinear models. We propose an efficient first-order Conditional Gradient algorithm for solving the underlying optimization problem. In comparison to the most prominent alternative algorithms, the new algorithm shows significantly improved performance on several essential issues like sparsity-induction, structure-preservation, noise robustness, and sample efficiency. We demonstrate these advantages on several dynamics from the field of synchronization, particle dynamics, and enzyme chemistry.
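As an illustration of the algorithmic core, the following toy sketch runs a conditional gradient (Frank-Wolfe) method on an $\ell_1$-ball, whose vertex-based linear minimization oracle is what induces sparsity. The least-squares objective, data, and radius tau are invented for the example; this is not the CINDy implementation.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Conditional gradient (Frank-Wolfe) for min ||A x - b||^2 over the
// l1-ball of radius tau. The linear minimization oracle on the l1-ball
// returns a signed vertex +-tau * e_i, which keeps iterates sparse --
// the property exploited for sparse identification of dynamics.
int main() {
    // Toy data: 3 samples, 4 candidate library functions (columns of A).
    std::vector<std::vector<double>> A = {{1,0,2,0},{0,1,0,1},{1,1,1,1}};
    std::vector<double> b = {2, 1, 2};
    const double tau = 3.0;
    std::vector<double> x(4, 0.0);

    for (int k = 0; k < 200; ++k) {
        // gradient g = 2 A^T (A x - b)
        std::vector<double> r(3, 0.0), g(4, 0.0);
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 4; ++j) r[i] += A[i][j] * x[j];
            r[i] -= b[i];
        }
        for (int j = 0; j < 4; ++j)
            for (int i = 0; i < 3; ++i) g[j] += 2 * A[i][j] * r[i];

        // LMO: vertex of the l1-ball most aligned with -g
        int imax = 0;
        for (int j = 1; j < 4; ++j)
            if (std::fabs(g[j]) > std::fabs(g[imax])) imax = j;
        double s = (g[imax] > 0 ? -tau : tau);

        // Standard step size gamma = 2/(k+2); x stays a sparse convex combination
        double gamma = 2.0 / (k + 2);
        for (int j = 0; j < 4; ++j) x[j] *= (1 - gamma);
        x[imax] += gamma * s;
    }
    for (double xi : x) std::cout << xi << " ";
    std::cout << "\n";
}
```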
Efficient numerical methods for simulating cardiac electrophysiology with cellular resolution
(2023)
The cardiac extracellular-membrane-intracellular (EMI) model enables the precise geometrical representation and resolution of aggregates of individual myocytes. As a result, it not only yields more accurate simulations of cardiac excitation compared to homogenized models but also presents the challenge of solving much larger problems. In this paper, we introduce recent advancements in three key areas: (i) the creation of artificial, yet realistic grids, (ii) efficient higher-order time stepping achieved by combining low-overhead spatial adaptivity on the algebraic level with progressive spectral deferred correction methods, and (iii) substructuring domain decomposition preconditioners tailored to address the complexities of heterogeneous problem structures. The efficiency gains of these proposed methods are demonstrated through numerical results on cardiac meshes of different sizes.
Aims. Detection and quantification of myocardial scars are helpful both for diagnosis of heart diseases and for building personalized simulation models. Scar tissue is generally characterized by a different conduction of electrical excitation. We aim at estimating conductivity-related parameters from endocardial mapping data, in particular the conductivity tensor. Solving this inverse problem requires computationally expensive monodomain simulations on fine discretizations. Therefore, we aim at accelerating the estimation using a multilevel method combining electrophysiology models of different complexity, namely the monodomain and the eikonal model.
Methods. Distributed parameter estimation is performed by minimizing the misfit between simulated and measured electrical activity on the endocardial surface, subject to the monodomain model and regularization, leading to a constrained optimization problem. We formulate this optimization problem, including the modeling of scar tissue and different regularizations, and design an efficient iterative solver. We consider monodomain grid hierarchies and monodomain-eikonal model hierarchies in a recursive multilevel trust-region method.
Results. From several numerical examples, both the efficiency of the method and the estimation quality, depending on the data, are investigated. The multilevel solver is significantly faster than a comparable single level solver. Endocardial mapping data of realistic density appears to be just sufficient to provide quantitatively reasonable estimates of location, size, and shape of scars close to the endocardial surface.
Conclusion. In several situations, scar reconstruction based on eikonal and monodomain models differ significantly, suggesting the use of the more accurate but more expensive monodomain model for this purpose. Still, eikonal models can be utilized to accelerate the computations considerably, enabling the use of complex electrophysiology models for estimating myocardial scars from endocardial mapping data.
The electric conductivity of cardiac tissue determines excitation propagation and is important for quantifying ischemia and scar tissue and for building personalized models.
Estimating conductivity distributions from endocardial mapping data is a challenging inverse problem due to the computational complexity of the monodomain equation, which describes the cardiac excitation.
For computing a maximum posterior estimate, we investigate different optimization approaches based on adjoint gradient computation: steepest descent, limited memory BFGS, and recursive multilevel trust region methods using mesh hierarchies or heterogeneous model hierarchies. We compare overall performance, asymptotic convergence rate, and pre-asymptotic progress on selected examples in order to assess the benefit of our multifidelity acceleration.
The locality of solution features in cardiac electrophysiology simulations calls for adaptive methods. Due to the overhead incurred by established mesh refinement and coarsening, however, such approaches have failed to accelerate the computations. Here we investigate a different route to spatial adaptivity that is based on nested subset selection for algebraic degrees of freedom in spectral deferred correction methods. This combination of algebraic adaptivity and iterative solvers for higher order collocation time stepping realizes a multirate integration with minimal overhead. This leads to moderate but significant speedups in both monodomain and cell-by-cell models of cardiac excitation, as demonstrated on four numerical examples.
The paper deals with three different Newton algorithms that have recently been worked out in the general frame of affine invariance. Of particular interest is their performance in the numerical solution of discretized boundary value problems (BVPs) for nonlinear partial differential equations (PDEs). Exact Newton methods, where the arising linear systems are solved by direct elimination, and inexact Newton methods, where an inner iteration is used instead, are synoptically presented, both in affine invariant convergence theory and in numerical experiments. The three types of algorithms are: (a) affine covariant (formerly just called affine invariant) Newton algorithms, oriented toward the iterative errors, (b) affine contravariant Newton algorithms, based on iterative residual norms, and (c) affine conjugate Newton algorithms for convex optimization problems and discrete nonlinear elliptic PDEs.
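For readers unfamiliar with the basic building block, the following is a minimal damped Newton iteration for a small nonlinear system. The residual-based backtracking used here is a deliberately simple stand-in for the affine invariant damping strategies analyzed in the paper.

```cpp
#include <cmath>
#include <iostream>

// Damped Newton method for F(x,y) = (x^2 + y^2 - 4, x - y) = 0.
// The step length lambda is reduced until the residual norm decreases
// (a simple stand-in for affine invariant damping strategies).
int main() {
    double x = 5.0, y = 0.1;
    for (int k = 0; k < 50; ++k) {
        double f1 = x*x + y*y - 4, f2 = x - y;
        double res = std::hypot(f1, f2);
        if (res < 1e-12) break;
        // Jacobian J = [2x 2y; 1 -1]; solve J d = -F by Cramer's rule.
        double det = 2*x*(-1) - 2*y*1;
        double d1 = (-f1*(-1) - (-f2)*2*y) / det;
        double d2 = (2*x*(-f2) - 1*(-f1)) / det;
        double lambda = 1.0;
        while (lambda > 1e-10) {   // backtracking on the residual norm
            double xn = x + lambda*d1, yn = y + lambda*d2;
            double g1 = xn*xn + yn*yn - 4, g2 = xn - yn;
            if (std::hypot(g1, g2) < res) { x = xn; y = yn; break; }
            lambda /= 2;
        }
        std::cout << "k=" << k << "  lambda=" << lambda << "  |F|=" << res << "\n";
    }
}
```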
This paper surveys the required mathematics for a typical challenging problem from computational medicine, the cancer therapy planning in deep regional hyperthermia. In the course of many years of close cooperation with clinics, the medical problem gave rise to quite a number of subtle mathematical problems, part of which had been unsolved when the common project started. Efficiency of numerical algorithms, i.e. computational speed and monitored reliability, play a decisive role for the medical treatment. Off-the-shelf software had turned out to be not sufficient to meet the requirements of medicine. Rather, new mathematical theory as well as new numerical algorithms had to be developed. In order to make our algorithms useful in the clinical environment, new visualization software, a virtual lab, including 3D geometry processing of individual virtual patients had to be designed and implemented. Moreover, before the problems could be attacked by numerical algorithms, careful mathematical modelling had to be done. Finally, parameter identification and constrained optimization for the PDEs had to be newly analyzed and realized over the individual patient's geometry. Our new techniques had an impact on the specificity of the individual patients' treatment and on the construction of an improved hyperthermia applicator.
The finite element setting for nonlinear elliptic PDEs directly leads to the minimization of convex functionals. Uniform ellipticity of the underlying PDE shows up as strict convexity of the arising nonlinear functional. The paper analyzes computational variants of Newton's method for convex optimization in an affine conjugate setting, which reflects the appropriate affine transformation behavior for this class of problems. First, an affine conjugate Newton-Mysovskikh type theorem on the local quadratic convergence of the exact Newton method in Hilbert spaces is given. It can be easily extended to inexact Newton methods, where the inner iteration is only approximately solved. For fixed finite dimension, a special implementation of a Newton-PCG algorithm is worked out. In this case, the suggested monitor for the inner iteration guarantees quadratic convergence of the outer iteration. In infinite dimensional problems, the PCG method may be just formally replaced by any Galerkin method such as FEM for linear elliptic problems. Instead of the algebraic inner iteration errors we now have to control the FE discretization errors, which is a standard task performed within any adaptive multilevel method. A careful study of the information gain per computational effort leads to the result that the quadratic convergence mode of the Newton-Galerkin algorithm is the best mode for the fixed dimensional case, whereas for an adaptive variable dimensional code a special linear convergence mode of the algorithm is definitely preferable. The theoretical results are then illustrated by numerical experiments with a NEWTON-KASKADE algorithm.
The paper deals with the multilevel solution of elliptic partial differential equations (PDEs) in a finite element setting: uniform ellipticity of the PDE then goes with strict monotonicity of the derivative of a nonlinear convex functional. A Newton multigrid method is advocated, wherein linear residuals are evaluated within the multigrid method for the computation of the Newton corrections. The globalization is performed by some damping of the ordinary Newton corrections. The convergence results and the algorithm may be regarded as an extension of those for local Newton methods presented recently by the authors. An affine conjugate global convergence theory is given, which covers both the exact Newton method (neglecting the occurrence of approximation errors) and inexact Newton-Galerkin methods addressing the crucial issue of accuracy matching between discretization and iteration errors. The obtained theoretical results are directly applied for the construction of adaptive algorithms. Finally, illustrative numerical experiments with a NEWTON-KASKADE code are documented.
Numerische Mathematik 3
(2011)
In the clinical cancer therapy of regional hyperthermia, nonlinear perfusion effects inside and outside the tumor seem to play a non-negligible role. A stationary model of such effects leads to a nonlinear Helmholtz term within an elliptic boundary value problem. The present paper reports on the application of a recently designed adaptive multilevel FEM to this problem. For several 3D virtual patients, the nonlinear model is compared with the linear one. Moreover, the numerical efficiency of the new algorithm is compared with that of a former application of an adaptive FEM to the corresponding instationary model PDE.
Recently developed Concentric Tube Continuum Robots (CTCRs) are widely used, for example, in minimally invasive surgery, which involves navigating inside narrow body cavities close to sensitive regions. These CTCRs can be controlled by extending and rotating the tubes in order to reach a target point or perform some task. The robot must deviate as little as possible from this narrow space and avoid damaging neighbouring tissue. We consider open-loop optimal control of CTCRs parameterized over pseudo-time, primarily aiming at minimizing the robot's working volume during its motion. External loads acting on the system, like tip loads or contact with tissue, are not considered here. We also discuss the inclusion of the tip's orientation in the optimization framework in order to perform certain tasks. We recall a quaternion-based formulation of the robot configuration, discuss discretization, develop optimization objectives addressing different criteria, and investigate their impact on robot path planning for several numerical examples. The framework can be applied to any backbone-based continuum robot.
Parallel in time methods for solving initial value problems are a means to increase the parallelism of numerical simulations. Hybrid parareal schemes interleaving the parallel in time iteration with an iterative solution of the individual time steps are among the most efficient methods for general nonlinear problems. Despite the hiding of communication time behind computation, communication has in certain situations a significant impact on the total runtime. Here we present strict, yet not sharp, error bounds for hybrid parareal methods with inexact communication due to lossy data compression, and derive theoretical estimates of the impact of compression on the parallel efficiency of the algorithms. These and some computational experiments suggest that compression is a viable method to make hybrid parareal schemes robust with respect to low bandwidth setups.
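A minimal sketch of the scheme under discussion: the parareal correction for a scalar test equation, with a crude mid-tread quantizer standing in for the lossy compression of the communicated states. Propagators, tolerances, and the test problem are all illustrative.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Parareal for u' = lambda*u on [0,1] with N time slices.
// Coarse propagator G: one explicit Euler step per slice.
// Fine propagator F: many Euler steps per slice.
// Fine results communicated between slices are quantized,
// mimicking lossy compression of the transmitted states.
const double lambda = -2.0;

double G(double u, double dt) { return u + dt * lambda * u; }
double F(double u, double dt) {
    int m = 100;
    for (int i = 0; i < m; ++i) u += (dt/m) * lambda * u;
    return u;
}
double quantize(double u, double tol) { return tol * std::round(u / tol); }

int main() {
    const int N = 10;
    const double dt = 1.0 / N, tol = 1e-6;
    std::vector<double> U(N+1, 0.0), Unew(N+1, 0.0);
    U[0] = 1.0;
    for (int n = 0; n < N; ++n) U[n+1] = G(U[n], dt);  // initial coarse sweep

    for (int k = 0; k < 5; ++k) {                      // parareal iterations
        Unew[0] = 1.0;
        for (int n = 0; n < N; ++n) {
            double fine = quantize(F(U[n], dt), tol);  // "compressed" fine value
            Unew[n+1] = G(Unew[n], dt) + fine - G(U[n], dt);
        }
        U = Unew;
        std::cout << "k=" << k << "  error=" << std::fabs(U[N] - std::exp(lambda)) << "\n";
    }
}
```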
On the Accuracy of Eikonal Approximations in Cardiac Electrophysiology in the Presence of Fibrosis
(2023)
Fibrotic tissue is one of the main risk factors for cardiac arrhythmias. It is therefore a key component in computational studies. In this work, we compare the monodomain equation to two eikonal models for cardiac electrophysiology in the presence of fibrosis. We show that discontinuities in the conductivity field, due to the presence of fibrosis, introduce a delay in the activation times. The monodomain equation and the eikonal-diffusion model correctly capture these delays, in contrast to the classical eikonal equation. Importantly, a coarse space discretization of the monodomain equation amplifies these delays, even after accounting for numerical error in conduction velocity. The numerical discretization may also introduce artificial conduction blocks and hence increase propagation complexity. Therefore, some care is required when comparing eikonal models to the discretized monodomain equation.
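For orientation, the models compared above have, roughly and in simplified notation, the following standard forms:

\[
\chi \bigl( C_m\, \partial_t v + I_{\mathrm{ion}}(v, w) \bigr) = \nabla \cdot (\sigma \nabla v) \qquad \text{(monodomain)},
\]
\[
c_0 \sqrt{\nabla \tau^\top D\, \nabla \tau} - \nabla \cdot (D \nabla \tau) = 1 \qquad \text{(eikonal-diffusion; the classical eikonal equation omits the diffusion term)},
\]

with transmembrane voltage $v$, ionic current $I_{\mathrm{ion}}$, activation time $\tau$, conductivity tensors $\sigma$ and $D$, membrane capacitance $C_m$, surface-to-volume ratio $\chi$, and velocity scaling $c_0$.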
This paper is concerned with the sensitivities of function space oriented interior point approximations in parameter dependent problems. For an abstract setting that covers control constrained optimal control problems, the convergence of interior point sensitivities to the sensitivities of the optimal solution is shown. Error bounds for $L_q$ norms are derived and illustrated with numerical examples.
Ray Tracing Boundary Value Problems: Simulation and SAFT Reconstruction for Ultrasonic Testing
(2016)
The application of advanced imaging techniques for the ultrasonic inspection of inhomogeneous anisotropic materials like austenitic and dissimilar welds requires information about acoustic wave propagation through the material, in particular travel times between two points in the material. Forward ray tracing is a popular approach to determine traveling paths and arrival times but is ill suited for inverse problems since a large number of rays have to be computed in order to arrive at prescribed end points.
In this contribution we discuss boundary value problems for acoustic rays, where the ray path between two given points is determined by solving the eikonal equation. The implementation of such a two point boundary value ray tracer for sound field simulations through an austenitic weld is described and its efficiency as well as the obtained results are compared to those of a forward ray tracer. The results are validated by comparison with experimental results and commercially available UT simulation tools.
As an application, we discuss an implementation of the method for SAFT (Synthetic Aperture Focusing Technique) reconstruction. The ray tracer calculates the required travel time through the anisotropic columnar grain structure of the austenitic weld. There, the formulation of ray tracing as a boundary value problem allows a straightforward derivation of the ray path from a given transducer position to any pixel in the reconstruction area and reduces the computational cost considerably.
Carbon-fiber reinforced composites are becoming more and more important in the production of light-weight structures, e.g., in the automotive and aerospace industry. Thermography is often used for non-destructive testing of these products, especially to detect delaminations between different layers of the composite.
In this presentation, we aim at methods for defect reconstruction from thermographic measurements of such carbon-fiber reinforced composites. The reconstruction results shall not only allow to locate defects, but also give a quantitative characterization of the defect properties. We discuss the simulation of the measurement process using finite element methods, as well as the experimental validation on flat bottom holes.
Especially in pulse thermography, thin boundary layers with steep temperature gradients occurring at the heated surface need to be resolved. Here we combine a 1D analytical solution with a numerical solution of the remaining defect equation. We use the simulations to identify material parameters from the measurements.
Finally, fast heuristics for reconstructing defect geometries are applied to the acquired data, and compared for their accuracy and utility in detecting different defects like back surface defects or delaminations.
This paper presents efficient computational techniques for solving an optimization problem in cardiac defibrillation governed by the monodomain equations. Time-dependent electrical currents injected at different spatial positions act as the control. Inexact Newton-CG methods are used, with reduced gradient computation by adjoint solves. In order to reduce the computational complexity, adaptive mesh refinement for state and adjoint equations is performed. To reduce the high storage and bandwidth demand imposed by adjoint gradient and Hessian-vector evaluations, a lossy compression technique for storing trajectory data is applied. An adaptive choice of quantization tolerance based on error estimates is developed in order to ensure convergence. The efficiency of the proposed approach is demonstrated on numerical examples.
Kaskade 7 is a finite element toolbox for the solution of stationary or transient systems of partial differential equations, aimed at supporting application-oriented research in numerical analysis and scientific computing. The library is written in C++ and is based on the Dune interface. The code is independent of spatial dimension and works with different grid managers. An important feature is the mix-and-match approach to discretizing systems of PDEs with different ansatz and test spaces for all variables.
We describe the mathematical concepts behind the library as well as its structure, illustrating its use at several examples on the way.
In high accuracy numerical simulations and optimal control of time-dependent processes, often both many time steps and fine spatial discretizations are needed. Adjoint gradient computation, or post-processing of simulation results, requires the storage of the solution trajectories over the whole time, if necessary together with the adaptively refined spatial grids. In this paper we discuss various techniques to reduce the memory requirements, focusing first on the storage of the solution data, which typically are double precision floating point values. We highlight advantages and disadvantages of the different approaches. Moreover, we present an algorithm for the efficient storage of adaptively refined, hierarchic grids, and the integration with the compressed storage of solution data.
In optimal control problems with nonlinear time-dependent 3D PDEs, the computation of the reduced gradient by adjoint methods requires one solve of the state equation forward in time, and one backward solve of the adjoint equation. Since the state enters into the adjoint equation, the storage of a 4D discretization is necessary. We propose a lossy compression algorithm using a cheap predictor for the state data, with additional entropy coding of prediction errors. Analytical and numerical results indicate that compression factors around 30 can be obtained without exceeding the FE discretization error.
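The predict-then-quantize idea can be sketched in a few lines: the previously reconstructed time step serves as the cheap predictor, and only the quantized prediction errors would be handed to an entropy coder (omitted here). The toy state update and tolerance are invented for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

// Lossy trajectory compression sketch: use the previous time step as a
// predictor for the current one and store only quantized prediction
// errors. In a real implementation the integer symbols would be entropy
// coded; here we only check the round-trip error bound.
int main() {
    const double tol = 1e-4;               // quantization tolerance
    const int steps = 100, n = 50;
    std::vector<double> state(n, 0.0), reconstructed(n, 0.0);
    std::vector<int32_t> symbols(n);
    double maxErr = 0.0;

    for (int t = 0; t < steps; ++t) {
        // toy "PDE" update producing the next state
        for (int i = 0; i < n; ++i)
            state[i] = std::sin(0.1 * t + 0.2 * i);

        // encode: quantize deviation from the predictor (the previous
        // *reconstructed* step, so quantization errors do not accumulate)
        for (int i = 0; i < n; ++i) {
            symbols[i] = static_cast<int32_t>(std::lround((state[i] - reconstructed[i]) / tol));
            reconstructed[i] += symbols[i] * tol;  // decode in lockstep
            maxErr = std::max(maxErr, std::fabs(state[i] - reconstructed[i]));
        }
    }
    std::cout << "max pointwise error: " << maxErr
              << " (bounded by " << tol / 2 << ")\n";
}
```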
For the solution of optimal control problems governed by nonlinear parabolic PDEs, methods working on the reduced objective functional are often employed to avoid a full spatio-temporal discretization of the problem. The evaluation of the reduced gradient requires one solve of the state equation forward in time, and one backward solve of the adjoint equation. The state enters into the adjoint equation, requiring the storage of a full 4D data set. If Newton-CG methods are used, two additional trajectories have to be stored. To get numerical results which are accurate enough, in many cases very fine discretizations in time and space are necessary, which leads to a significant amount of data to be stored and transmitted to mass storage. Lossy compression methods were developed to overcome the storage problem by reducing the accuracy of the stored trajectories. The inexact data induces errors in the reduced gradient and reduced Hessian. In this paper, we analyze the influence of such a lossy trajectory compression method on Newton-CG methods for optimal control of parabolic PDEs and design an adaptive strategy for choosing appropriate quantization tolerances.
Solvers for partial differential equations (PDEs) are one of the cornerstones of computational science. For large problems, they involve huge amounts of data that need to be stored and transmitted on all levels of the memory hierarchy. Often, bandwidth is the limiting factor due to the relatively small arithmetic intensity, and increasingly due to the growing disparity between computing power and bandwidth. Consequently, data compression techniques have been investigated and tailored towards the specific requirements of PDE solvers over the recent decades. This paper surveys data compression challenges and discusses examples of corresponding solution approaches for PDE problems, covering all levels of the memory hierarchy from mass storage up to the main memory. We illustrate concepts for particular methods, with examples, and give references to alternatives.
Pulse thermography is a non-destructive testing method based on infrared imaging of transient thermal patterns. Heating the surface of the structure under test for a short period of time generates a non-stationary temperature distribution and thus a thermal contrast between the defect and the sound material. Due to measurement noise, preprocessing of the experimental data is necessary, before reconstruction algorithms can be applied. We propose a decomposition of the measured temperature into Green's function solutions to eliminate noise.
This paper presents concepts and implementation of the finite element toolbox Kaskade 7, a flexible C++ code for solving elliptic and parabolic PDE systems. Issues such as problem formulation, assembly and adaptivity are discussed at the example of optimal control problems. Trajectory compression for parabolic optimization problems is considered as a case study.
We consider Large Deformation Diffeomorphic Metric Mapping of general $m$-currents. After stating an optimization algorithm in the function space of admissible morph generating velocity fields, two innovative aspects in this framework are presented and numerically investigated: First, we spatially discretize the velocity field with conforming adaptive finite elements and discuss advantages of this new approach. Second, we directly compute the temporal evolution of discrete $m$-current attributes.
We present a Newton-like method to solve inverse problems and to quantify parameter uncertainties. We apply the method to parameter reconstruction in optical scatterometry, where we take into account a priori information and measurement uncertainties using a Bayesian approach. Further, we discuss the influence of numerical accuracy on the reconstruction result.
The paper presents a particle method framework for resolving molecular dynamics. Error estimators for both the temporal and spatial discretization are advocated and facilitate a fully adaptive propagation. For time integration, the implicit trapezoidal rule is employed, where an explicit predictor enables large time steps. The framework is developed and exemplified in the context of the classical Liouville equation, where Gaussian phase-space packets are used as particles. Simplified variants, which should prove easy to implement in common molecular dynamics codes, are discussed briefly. The concept is illustrated by numerical examples for one-dimensional dynamics in a double well potential.
A Balancing Domain Decomposition by Constraints (BDDC) preconditioner is constructed and analyzed for the solution of hybrid Discontinuous Galerkin discretizations of reaction-diffusion systems of ordinary and partial differential equations arising in cardiac cell-by-cell models. The latter are different from the classical Bidomain and Monodomain cardiac models based on homogenized descriptions of the cardiac tissue at the macroscopic level, and therefore they allow the representation of individual cardiac cells, cell aggregates, damaged tissues and nonuniform distributions of ion channels on the cell membrane. The resulting discrete cell-by-cell models have discontinuous global solutions across the cell boundaries, hence the proposed BDDC preconditioner is based on appropriate dual and primal spaces with additional constraints which transfer information between cells (subdomains) without influencing the overall discontinuity of the global solution. A scalable convergence rate bound is proved for the resulting BDDC cell-by-cell preconditioned operator, while numerical tests validate this bound and investigate its dependence on the discretization parameters.
Multigrid methods for two-body contact problems are mostly based on special mortar discretizations, nonlinear Gauss-Seidel solvers, and solution-adapted coarse grid spaces. Their high computational efficiency comes at the cost of a complex implementation and a nonsymmetric master-slave discretization of the nonpenetration condition. Here we investigate an alternative symmetric and overconstrained segment-to-segment contact formulation that allows for a simple implementation based on standard multigrid and a symmetric treatment of contact boundaries, but leads to nonunique multipliers. For the solution of the arising quadratic programs, we propose augmented Lagrangian multigrid with overlapping block Gauss-Seidel smoothers. Approximation and convergence properties are studied numerically at standard test problems.
We consider a shape implant design problem that arises in the context of facial surgery. We introduce a reformulation as an optimal control problem, where the control acts as a boundary force. The state is modelled as a minimizer of a polyconvex hyperelastic energy functional. We show existence of optimal solutions and derive, on a formal level, first order optimality conditions. Finally, preliminary numerical results are presented.
We propose a composite step method, designed for equality constrained optimization with partial differential equations. Focus is laid on the construction of a globalization scheme, which is based on cubic regularization of the objective and an affine covariant damped Newton method for feasibility. We show finite termination of the inner loop and fast local convergence of the algorithm. We discuss preconditioning strategies for the iterative solution of the arising linear systems with projected conjugate gradient. Numerical results are shown for optimal control problems subject to a nonlinear heat equation and subject to nonlinear elastic equations arising from an implant design problem in craniofacial surgery.
In an aging society where the number of joint replacements rises, it is important to also increase the longevity of implants. In particular, hip implants have a lifetime of at most 15 years. This derives primarily from pain due to implant migration, wear, inflammation, and dislocation, which is affected by the positioning of the implant during the surgery. Current joint replacement practice uses 2D software tools and relies on the experience of surgeons. Especially the 2D tools fail to take the patients' natural range of motion as well as the stress distribution in the 3D joint induced by different daily motions into account.

Optimizing the hip joint implant position for all possible parametrized motions under the constraint of a contact problem is prohibitively expensive, as there are too many motions and every position change demands a recalculation of the contact problem. To reduce the computational effort, we use adaptive refinement on the parameter domain coupled with the interpolation method of Kriging. A coarse initial grid is locally refined using goal-oriented error estimation, reducing locally high variances. This approach is combined with multigrid optimization such that numerical errors are reduced.
This paper considers the optimal control of tuberculosis through education, diagnosis campaigns, and chemoprophylaxis of the latently infected. A mathematical model which includes important components such as undiagnosed infectious, diagnosed infectious, latently infected, and lost-sight infectious individuals is formulated. The model combines a frequency dependent and a density dependent force of infection for TB transmission. Through optimal control theory and numerical simulations, a cost-effective balance of two different intervention methods is obtained. Seeking to minimize the amount of money the government spends while tuberculosis remains endemic in the Cameroonian population, Pontryagin's maximum principle is used to characterize the optimal control. The optimality system is derived and solved numerically using the forward-backward sweep method (FBSM). Results provide a framework for designing cost-effective strategies for diseases with multiple intervention methods. It turns out that by combining chemoprophylaxis and education, the burden of TB can be reduced by 80 % in 10 years.
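For illustration, the skeleton of the forward-backward sweep method reads as follows for a scalar linear-quadratic toy problem; the TB model itself, with its multiple compartments and controls, follows the same pattern but is omitted here.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Forward-backward sweep (FBSM) for the toy problem
//   min  integral_0^1 (x^2 + u^2) dt   s.t.  x' = x + u,  x(0) = 1.
// Hamiltonian H = x^2 + u^2 + p (x + u) gives
//   adjoint:  p' = -2x - p,  p(1) = 0;   optimality:  u = -p/2.
int main() {
    const int N = 1000;
    const double h = 1.0 / N;
    std::vector<double> x(N+1), p(N+1), u(N+1, 0.0);

    for (int it = 0; it < 100; ++it) {
        x[0] = 1.0;                                   // forward sweep (Euler)
        for (int i = 0; i < N; ++i)
            x[i+1] = x[i] + h * (x[i] + u[i]);
        p[N] = 0.0;                                   // backward sweep
        for (int i = N; i > 0; --i)
            p[i-1] = p[i] + h * (2*x[i] + p[i]);
        double change = 0.0;                          // relaxed control update
        for (int i = 0; i <= N; ++i) {
            double unew = 0.5 * u[i] + 0.5 * (-p[i] / 2);
            change = std::max(change, std::fabs(unew - u[i]));
            u[i] = unew;
        }
        if (change < 1e-10) { std::cout << "converged after " << it+1 << " sweeps\n"; break; }
    }
    std::cout << "u(0) = " << u[0] << "\n";
}
```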
Since carbon-fiber reinforced polymers (CFRP) are used in demanding, safety-relevant fields such as automotive manufacturing and aviation, there is a growing need for non-destructive testing methods. The goal is to ensure the safety and reliability of the components in use. Active thermography methods enable the efficient inspection of large areas at high resolution in a few work steps. An important subtask is the localization and characterization of delaminations, which can occur both during manufacturing and during the use of a component, and which weaken its structural integrity. In this contribution, CFRP structures with artificial and natural delaminations are investigated experimentally using radiation sources with different temporal modulation, namely flash lamps and frequency-modulated halogen lamps. Filter functions in the time and frequency domain are used to optimize the contrast-to-noise ratio (CNR) of the detected defects. The detection sensitivity, the CNR, and the spatial resolution of the delaminations to be characterized are then compared for the different excitation and evaluation techniques. The experiments are complemented by numerical simulations of the three-dimensional heat transport.
Epidemiological models can not only be used to forecast the course of a pandemic like COVID-19, but also to propose and design non-pharmaceutical interventions such as school and work closing. In general, the design of optimal policies leads to nonlinear optimization problems that can be solved by numerical algorithms. Epidemiological models come in different complexities, ranging from systems of simple ordinary differential equations (ODEs) to complex agent-based models (ABMs). The former allow a fast and straightforward optimization, but are limited in accuracy, detail, and parameterization, while the latter can resolve spreading processes in detail, but are extremely expensive to optimize. We consider policy optimization in a prototypical situation modeled as both ODE and ABM, review numerical optimization approaches, and propose a heterogeneous multilevel approach based on combining a fine-resolution ABM and a coarse ODE model. Numerical experiments, in particular with respect to convergence speed, are given for illustrative examples.
Following axon pathfinding, growth cones transition from stochastic filopodial exploration to the formation of a limited number of synapses. How the interplay of filopodia and synapse assembly ensures robust connectivity in the brain has remained a challenging problem. Here, we developed a new 4D analysis method for filopodial dynamics and a data-driven computational model of synapse formation for R7 photoreceptor axons in developing Drosophila brains. Our live data support a 'serial synapse formation' model, where at any time point only a single 'synaptogenic' filopodium suppresses the synaptic competence of other filopodia through competition for synaptic seeding factors. Loss of the synaptic seeding factors Syd-1 and Liprin-α leads to a loss of this suppression, filopodial destabilization and reduced synapse formation, which is sufficient to cause the destabilization of entire axon terminals. Our model provides a filopodial 'winner-takes-all' mechanism that ensures the formation of an appropriate number of synapses.
Container Adaptors
(1999)
The C++ standard template library has many useful containers for data. The standard library includes two adaptors, queue and stack. The authors have extended this model along the lines of relational database semantics. Sometimes the analogy is striking, and we will point it out occasionally. An adaptor allows the standard algorithms to be used on a subset or modification of the data without having to copy the data elements into a new container. The authors provide many useful adaptors which can be used together to produce interesting views of data in a container.
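The adaptor idea has since entered the C++ standard itself; the following minimal example (using C++20 ranges, not the authors' 1999 library) shows a filtered view of a container that standard algorithms can operate on without copying elements.

```cpp
#include <algorithm>
#include <iostream>
#include <ranges>
#include <vector>

// The adaptor idea in its modern standard form: a view presents a
// filtered "subset" of a container to standard algorithms without
// copying any elements.
int main() {
    std::vector<int> data = {5, 1, 8, 3, 9, 2};
    auto evens = data | std::views::filter([](int x) { return x % 2 == 0; });
    for (int x : evens) std::cout << x << " ";   // prints: 8 2
    std::cout << "\n"
              << std::ranges::count_if(evens, [](int x) { return x > 4; })
              << "\n";                           // algorithms work on the view
}
```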
The paper addresses a primal interior point method for state constrained PDE optimal control problems. By a Lavrentiev regularization, the state constraint is transformed into a mixed control-state constraint with bounded Lagrange multiplier. Existence and convergence of the central path are established, and linear convergence of a short-step pathfollowing method is shown. The behaviour of the regularizations is demonstrated by numerical examples.
Statistical methods to design computer experiments usually rely on a Gaussian process (GP) surrogate model, and typically aim at selecting design points (combinations of algorithmic and model parameters) that minimize the average prediction variance, or maximize the prediction accuracy for the hyperparameters of the GP surrogate.
In many applications, experiments have a tunable precision, in the sense that one software parameter controls the tradeoff between accuracy and computing time (e.g., mesh size in FEM simulations or number of Monte-Carlo samples).
We formulate the problem of allocating a budget of computing time over a finite set of candidate points for the goals mentioned above. This is a continuous optimization problem, which is moreover convex whenever the tradeoff function accuracy vs. computing time is concave.
On the other hand, using non-concave weight functions can help to identify sparse designs. In addition, using sparse kernel approximations drastically reduces the cost per iteration of the multiplicative weights updates that can be used to solve this problem.
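A minimal sketch of the multiplicative weights update on the simplex of budget fractions; the criterion gradient used here is a placeholder for the derivative of the actual design criterion (e.g., the average prediction variance of the GP surrogate).

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Multiplicative weights update for allocating a computing budget over
// candidate design points: weights are scaled by the exponentiated
// marginal benefit and renormalized, so the allocation stays on the
// probability simplex. The "gradient" is a placeholder for the
// derivative of the real design criterion.
int main() {
    const int n = 5;
    const double eta = 0.5;                       // learning rate
    std::vector<double> w(n, 1.0 / n);            // uniform initial allocation

    for (int it = 0; it < 100; ++it) {
        std::vector<double> grad(n);
        for (int i = 0; i < n; ++i)               // placeholder marginal benefit,
            grad[i] = 1.0 / (0.1 + (i + 1) * w[i]); // decreasing in the allocation
        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            w[i] *= std::exp(eta * grad[i]);      // reward high-benefit points
            sum += w[i];
        }
        for (double& wi : w) wi /= sum;           // renormalize onto the simplex
    }
    for (double wi : w) std::cout << wi << " ";
    std::cout << "\n";
}
```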
Fast nonlinear programming methods following the all-at-once approach usually employ Newton's method for solving linearized Karush-Kuhn-Tucker (KKT) systems. In nonconvex problems, the Newton direction is only guaranteed to be a descent direction if the Hessian of the Lagrange function is positive definite on the nullspace of the active constraints; otherwise some modifications to Newton's method are necessary. This condition can be verified using the signs of the eigenvalues (inertia) of the KKT matrix, which are usually available from direct solvers for the arising linear saddle point problems. Iterative solvers are mandatory for very large-scale problems, but in general do not provide the inertia. Here we present a preconditioner based on a multilevel incomplete $LBL^T$ factorization, from which an approximation of the inertia can be obtained. The suitability of the heuristics for application in optimization methods is verified on an interior point method applied to the CUTE and COPS test problems, on large-scale 3D PDE-constrained optimal control problems, as well as on 3D PDE-constrained optimization in biomedical cancer hyperthermia treatment planning. The efficiency of the preconditioner is demonstrated on convex and nonconvex problems with $150^3$ state variables and $150^2$ control variables, both subject to bound constraints.
Temperature-based death time estimation relies either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex, but allow a higher accuracy of death time estimation, since in principle all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element based cooling simulation models is presented.
The following steps are demonstrated on CT-phantoms:
• CT-scan
• Segmentation of the CT images for thermodynamically relevant features of individual geometries
• Conversion of the segmentation result into a Finite Element (FE) simulation model
• Computation of the model cooling curve
• Calculation of the cooling time
For the first time in FE-based cooling time estimation, the steps from the CT image via segmentation to FE model generation are performed semi-automatically. The cooling time calculation results are compared to cooling measurements performed on the phantoms under controlled conditions. In this context, the method is validated using different CT phantoms. Some of the CT phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
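For contrast with the FE workflow above, here is a minimal sketch of the kind of simple phenomenological cooling model mentioned at the start of this abstract (a Marshall-Hoare type double-exponential curve; all parameter values are illustrative and not taken from the paper):

// Minimal sketch of a phenomenological double-exponential cooling
// curve (Marshall-Hoare type) and a bisection inversion for the
// cooling time. All parameter values are illustrative.
#include <cmath>
#include <iostream>

// Normalized temperature Q(t) = (T(t) - T_env) / (T_0 - T_env), A > 1.
double Q(double t_h, double A, double B) {
    return A * std::exp(-B * t_h)
         - (A - 1.0) * std::exp(-A * B * t_h / (A - 1.0));
}

// Invert Q(t) = q by bisection; Q is strictly decreasing for t > 0.
double cooling_time(double q, double A, double B, double t_max = 100.0) {
    double lo = 0.0, hi = t_max;
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        if (Q(mid, A, B) > q) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    const double T0 = 37.2, Tenv = 18.0, Tmeas = 27.0;  // illustrative
    const double q = (Tmeas - Tenv) / (T0 - Tenv);
    std::cout << "estimated cooling time: "
              << cooling_time(q, /*A=*/1.25, /*B=*/0.08) << " h\n";
}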
We consider a linear iterative solver for large-scale linearly constrained quadratic minimization problems that arise, for example, in optimization with PDEs. By a primal-dual projection (PDP) iteration, which can be interpreted and analysed as a gradient method on a quotient space, the given problem can be solved by computing solutions for a sequence of constrained surrogate problems, projections onto the feasible subspaces, and Lagrange multiplier updates. As a major application we consider a class of optimization problems with PDEs, where PDP can be applied together with a projected cg method using a block triangular constraint preconditioner. Numerical experiments show reliable and competitive performance for an optimal control problem in elasticity.
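Schematically, and only as a paraphrase of the abstract (the precise iteration is defined in the paper): for $\min_x \frac{1}{2}\langle Ax, x\rangle - \langle b, x\rangle$ subject to $Bx = c$, each PDP step solves a surrogate problem for the current Lagrange multiplier estimate, projects the resulting iterate onto the feasible subspace $\{x : Bx = c\}$, and then updates the multiplier, which is what permits the interpretation as a gradient method on a quotient space.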
A thorough convergence analysis of the Control Reduced Interior Point Method in function space is performed. This recently proposed method is a primal interior point path-following scheme with the special feature that the control variable is eliminated from the optimality system. Apart from global linear convergence, we show that this method converges locally almost quadratically if the optimal solution satisfies a function space analogue of a non-degeneracy condition. In numerical experiments we observe that a prototype implementation of our method behaves in compliance with our theoretical results.
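The elimination idea in generic form (my illustration, not the paper's notation): with a control cost $\frac{\alpha}{2}\|u\|^2$ and logarithmic barrier terms for the control bounds, the stationarity condition with respect to $u$ is a pointwise scalar equation that can be solved for $u$ in terms of the adjoint state $p$ and the barrier parameter $\mu$; substituting this back removes $u$ from the optimality system, so the path-following iteration acts only on the state and adjoint variables.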
We consider an optimal control problem from hyperthermia treatment planning and its barrier regularization. We derive basic results, which lay the groundwork for the computation of optimal solutions via an interior point path-following method. Further, we report on a numerical implementation of such a method and its performance on an example problem.
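In generic form (not necessarily the paper's exact notation): inequality constraints $c(y, u) \ge 0$ are handled by adding a barrier term $-\mu \int_\Omega \log c(y, u) \, dx$ to the objective, and the path-following method tracks the minimizers of the regularized problems as $\mu \downarrow 0$.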
The growing discrepancy between CPU computing power and memory bandwidth drives more and more numerical algorithms into a bandwidth-bound regime. One example is the overlapping Schwarz smoother, a highly effective building block for iterative multigrid solution of elliptic equations with higher order finite elements. Two options for reducing the required memory bandwidth are sparsity-exploiting storage layouts and representing matrix entries with reduced precision in floating point or fixed point format. We investigate the impact of several options on storage demand and contraction rate, both analytically in the context of subspace correction methods and numerically on an example from solid mechanics. Both perspectives agree on the preferred scheme: fixed point representation of Cholesky factors in nested dissection storage.
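A minimal sketch of the fixed point idea (my illustration; the block layout, scaling, and format used in the paper may differ): entries of a Cholesky factor are stored as 16-bit integers with one shared scale per block, reducing the memory traffic compared to doubles at the price of a quantization error that the subspace correction analysis must absorb.

// Sketch: store a block of Cholesky-factor entries as 16-bit fixed
// point integers with one shared scale per block; dequantize on load.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct FixedPointBlock {
    double scale;                  // one scale shared by the whole block
    std::vector<std::int16_t> q;   // quantized entries
};

FixedPointBlock compress(const std::vector<double>& entries) {
    double maxabs = 0.0;
    for (double v : entries) maxabs = std::max(maxabs, std::abs(v));
    FixedPointBlock b;
    b.scale = (maxabs > 0.0) ? maxabs / 32767.0 : 1.0;
    b.q.reserve(entries.size());
    for (double v : entries)
        b.q.push_back(static_cast<std::int16_t>(std::lround(v / b.scale)));
    return b;
}

double load(const FixedPointBlock& b, std::size_t i) {
    return b.scale * static_cast<double>(b.q[i]);  // dequantize on the fly
}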
Adaptive Gaussian Process Regression for Efficient Building of Surrogate Models in Inverse Problems
(2023)
In a task where many similar inverse problems must be solved, evaluating costly simulations is impractical. Therefore, replacing the model y with a surrogate model y(s) that can be evaluated quickly leads to a significant speedup. The approximation quality of the surrogate model depends strongly on the number, position, and accuracy of the sample points. Together with a finite computational budget, this leads to a problem of (computer) experimental design. In contrast to the selection of sample points, the trade-off between accuracy and effort has hardly been studied systematically. We therefore propose an adaptive algorithm to find an optimal design in terms of position and accuracy. Pursuing a sequential design by incrementally extending the computational budget leads to a convex and constrained optimization problem. As a surrogate, we construct a Gaussian process regression model. We measure the global approximation error in terms of its impact on the accuracy of the identified parameter and aim for a uniform absolute tolerance, assuming that y(s) is computed by finite element calculations. A priori error estimates and a coarse estimate of the computational effort relate the expected improvement of the surrogate model error to the computational effort, resulting in the most efficient combination of sample point and evaluation tolerance. We also allow for improving the accuracy of already existing sample points by continuing previously truncated finite element solution procedures.
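As a toy illustration of the surrogate construction (my sketch, not the paper's implementation): Gaussian process regression with an RBF kernel and per-point noise variances, mimicking sample points evaluated at different finite element tolerances, with mean and variance prediction via a dense Cholesky factorization.

// Toy Gaussian process regression with per-point noise variances,
// mimicking sample points evaluated at different accuracies.
#include <cmath>
#include <iostream>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Squared-exponential (RBF) kernel in 1D.
double rbf(double x, double y, double len) {
    double d = (x - y) / len;
    return std::exp(-0.5 * d * d);
}

// In-place Cholesky factorization A = L L^T (lower triangle of A).
void cholesky(Mat& A) {
    const int n = static_cast<int>(A.size());
    for (int j = 0; j < n; ++j) {
        for (int k = 0; k < j; ++k) A[j][j] -= A[j][k] * A[j][k];
        A[j][j] = std::sqrt(A[j][j]);
        for (int i = j + 1; i < n; ++i) {
            for (int k = 0; k < j; ++k) A[i][j] -= A[i][k] * A[j][k];
            A[i][j] /= A[j][j];
        }
    }
}

// Solve L L^T z = b using the lower triangle of L.
Vec solve(const Mat& L, Vec b) {
    const int n = static_cast<int>(L.size());
    for (int i = 0; i < n; ++i) {          // forward substitution
        for (int k = 0; k < i; ++k) b[i] -= L[i][k] * b[k];
        b[i] /= L[i][i];
    }
    for (int i = n - 1; i >= 0; --i) {     // backward substitution
        for (int k = i + 1; k < n; ++k) b[i] -= L[k][i] * b[k];
        b[i] /= L[i][i];
    }
    return b;
}

int main() {
    Vec xs{0.0, 0.5, 1.0}, ys{0.0, 0.4, 0.9};
    Vec noise{1e-4, 1e-2, 1e-4};   // middle point evaluated less accurately
    const double len = 0.7;
    const int n = static_cast<int>(xs.size());

    Mat K(n, Vec(n));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            K[i][j] = rbf(xs[i], xs[j], len) + (i == j ? noise[i] : 0.0);

    cholesky(K);
    Vec alpha = solve(K, ys);      // (K + diag(noise))^{-1} y

    const double x = 0.25;         // prediction point
    Vec k(n);
    for (int i = 0; i < n; ++i) k[i] = rbf(x, xs[i], len);
    Vec w = solve(K, k);
    double mean = 0.0, var = rbf(x, x, len);
    for (int i = 0; i < n; ++i) { mean += k[i] * alpha[i]; var -= k[i] * w[i]; }
    std::cout << "mean " << mean << ", variance " << var << "\n";
}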
Conduction velocity (CV) in cardiac tissue is a crucial electrophysiological parameter for arrhythmia vulnerability. Pathologically reduced conduction velocity facilitates arrhythmogenesis because it decreases the wavelength at which re-entry can occur. Computational studies exist on CV and how it changes regionally in models at spatial scales many times larger than actual cardiac cells. However, microscopic conduction within cells and between them has been studied less in simulations. In this work, we study the relation between microscopic conduction patterns and clinically observable macroscopic conduction using an extracellular-membrane-intracellular model, which represents cardiac tissue with these subdomains at subcellular resolution. By considering cell arrangement and non-uniform gap junction distribution, it yields anisotropic excitation propagation. This novel kind of model can, for example, be used to understand how discontinuous conduction on the microscopic level affects the fractionation of electrograms in healthy and fibrotic tissue. Along the membrane of a cell, we observed a continuously propagating activation wavefront. When transitioning from one cell to its neighbour, jumps in local activation times occurred, which led to global conduction velocities lower than those observed locally within each cell.
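In the usual formulation of such extracellular-membrane-intracellular (EMI) models (stated here generically and up to sign conventions; see the paper for the exact setting): quasi-static potentials satisfy $\nabla \cdot (\sigma_i \nabla u_i) = 0$ in the intracellular domains and $\nabla \cdot (\sigma_e \nabla u_e) = 0$ in the extracellular domain, coupled on the membrane $\Gamma$ by matching normal currents $\sigma_i \nabla u_i \cdot n_i = -\sigma_e \nabla u_e \cdot n_e =: I_m$ and the membrane dynamics $C_m \, \partial_t v = I_m - I_{\mathrm{ion}}(v)$ with transmembrane voltage $v = u_i - u_e$; gap junctions enter as additional interface conditions between neighbouring intracellular domains.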