Refine
Year of publication
Document Type
- Article (55)
- ZIB-Report (42)
- In Proceedings (11)
- In Collection (9)
- Book (4)
- Book chapter (3)
- Report (1)
Is part of the Bibliography
- no (125)
Keywords
- optimal control (14)
- interior point methods in function space (5)
- trajectory storage (4)
- discretization error (3)
- finite elements (3)
- lossy compression (3)
- Newton-CG (2)
- complementarity functions (2)
- compression (2)
- finite element method (2)
Institute
- Numerical Mathematics (125)
Statistical methods to design computer experiments usually rely on a Gaussian process (GP) surrogate model, and typically aim at selecting design points (combinations of algorithmic and model parameters) that minimize the average prediction variance, or maximize the prediction accuracy for the hyperparameters of the GP surrogate.
In many applications, experiments have a tunable precision, in the sense that one software parameter controls the tradeoff between accuracy and computing time (e.g., mesh size in FEM simulations or number of Monte-Carlo samples).
We formulate the problem of allocating a budget of computing time over a finite set of candidate points for the goals mentioned above. This is a continuous optimization problem, which is moreover convex whenever the tradeoff function between accuracy and computing time is concave.
On the other hand, using non-concave weight functions can help to identify sparse designs. In addition, using sparse kernel approximations drastically reduces the cost per iteration of the multiplicative weights updates that can be used to solve this problem.
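To make the solver mentioned above concrete, here is a minimal C++ sketch of one multiplicative weights update for the budget allocation (the exponentiated-gradient form, the step size eta, and the gradient oracle are illustrative assumptions, not details from the abstract):

#include <cmath>
#include <vector>

// Sketch: one multiplicative weights step for budget fractions w over
// candidate design points, given the gradient g of the design criterion
// at w. Points that decrease the criterion receive more budget.
void mwu_step(std::vector<double>& w, const std::vector<double>& g, double eta) {
    double sum = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i) {
        w[i] *= std::exp(-eta * g[i]);
        sum += w[i];
    }
    for (double& wi : w) wi /= sum;  // renormalize so the budget fractions sum to 1
}

With a concave accuracy-vs-time tradeoff the allocation problem is convex and such updates can converge to an optimal allocation; with non-concave weight functions, as noted above, many weights are driven to zero, yielding sparse designs.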
Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation, as in principle all relevant cooling mechanisms can be taken into account.
The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD based on a highly detailed 3D corpse model that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that we identify the most crucial parameters to measure or estimate, and obtain a local uncertainty quantification for the ToD.
Since carbon-fiber-reinforced polymers (CFRP) are used in demanding safety-relevant application areas such as automotive engineering and aviation, there is an increasing need for non-destructive testing methods. The goal is to guarantee the safety and reliability of the components in use. Active thermography methods enable the efficient inspection of large areas at high resolution in few working steps. An important subarea of these inspections is the localization and characterization of delaminations, which can arise both during manufacturing and during the use of a component, and which weaken its structural integrity. In this contribution, CFRP structures with artificial and natural delaminations are investigated experimentally using radiation sources with different temporal modulation, namely excitation by flash lamps and by frequency-modulated halogen lamps. Filter functions in the time and frequency domains are used to optimize the contrast-to-noise ratio (CNR) of the detected defects. The detection sensitivity, the CNR, and the spatial resolution of the delaminations to be characterized are then compared for the different excitation and evaluation techniques. The experiments are complemented by numerical simulations of the three-dimensional heat transport.
In several initial value problems with particularly expensive right hand side evaluation or implicit step computation, there is a trade-off between accuracy and computational effort. We consider inexact spectral deferred correction (SDC) methods for solving such initial value problems. SDC methods are interpreted as fixed point iterations and, due to their corrective iterative nature, allow exploiting the accuracy-work tradeoff to reduce the total computational effort. On the one hand, we derive error models bounding the total error in terms of the evaluation errors. On the other hand, we define work models describing the computational effort in terms of the evaluation accuracy. Combining both, a theoretically optimal local tolerance selection is worked out by minimizing the total work subject to achieving the requested tolerance. The properties of optimal local tolerances, the efficiency gain predicted in comparison with simpler heuristics, and a reasonable practical performance are illustrated on simple numerical examples.
In several initial value problems with particularly expensive right hand side computation, there is a trade-off between accuracy and computational effort in evaluating the right hand sides. We consider inexact spectral deferred correction (SDC) methods for solving such non-stiff initial value problems. SDC methods are interpreted as fixed point iterations and, due to their corrective iterative nature, allow exploiting the accuracy-work tradeoff to reduce the total computational effort. On the one hand, we derive an error model bounding the total error in terms of the right hand side evaluation errors. On the other hand, we define work models describing the computational effort in terms of the evaluation accuracy. Combining both, a theoretically optimal tolerance selection is worked out by minimizing the total work subject to achieving the requested tolerance.
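For readers unfamiliar with the fixed point view used in these two abstracts, a standard formulation (notation chosen here for illustration, not taken from the papers) is: collocation for $u' = f(u)$ on nodes $t_1,\dots,t_n$ with quadrature matrix $Q$ requires solving
\[
  u_i = u_0 + h \sum_{j=1}^{n} q_{ij}\, f(u_j), \qquad i = 1,\dots,n,
\]
and an SDC sweep is the preconditioned fixed point iteration
\[
  u^{k+1} = u_0 \mathbf{1} + h\,\hat{Q}\, f(u^{k+1}) + h\,(Q - \hat{Q})\, f(u^{k}),
\]
with a lower triangular $\hat{Q} \approx Q$, so that each sweep reduces to a sequence of implicit-Euler-like substeps converging to the collocation solution. In the inexact variants considered above, each evaluation of $f$ is only computed up to a local tolerance, which is then chosen to minimize the total work.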
The paper addresses a primal interior point method for state constrained PDE optimal control problems. By a Lavrentiev regularization, the state constraint is transformed into a mixed control-state constraint with bounded Lagrange multiplier. Existence and convergence of the central path are established, and linear convergence of a short-step pathfollowing method is shown. The behaviour of the regularizations is demonstrated by numerical examples.
A thorough convergence analysis of the Control Reduced Interior Point Method in function space is performed. This recently proposed method is a primal interior point pathfollowing scheme with the special feature that the control variable is eliminated from the optimality system. Apart from global linear convergence, we show that this method converges locally almost quadratically if the optimal solution satisfies a function space analogue of a non-degeneracy condition. In numerical experiments we observe that a prototype implementation of our method behaves in compliance with our theoretical results.
In optimal control problems with nonlinear time-dependent 3D PDEs, the computation of the reduced gradient by adjoint methods requires one solve of the state equation forward in time, and one backward solve of the adjoint equation. Since the state enters into the adjoint equation, the storage of a 4D discretization is necessary. We propose a lossy compression algorithm using a cheap predictor for the state data, with additional entropy coding of prediction errors. Analytical and numerical results indicate that compression factors around 30 can be obtained without exceeding the FE discretization error.
In optimal control problems with nonlinear time-dependent 3D PDEs, full 4D discretizations are usually prohibitive due to the storage requirement. For this reason gradient and quasi-Newton methods working on the reduced functional are often employed. The computation of the reduced gradient requires one solve of the state equation forward in time, and one backward solve of the adjoint equation. The state enters into the adjoint equation, again requiring the storage of a full 4D data set. We propose a lossy compression algorithm using an inexact but cheap predictor for the state data, with additional entropy coding of prediction errors. As the data is used inside a discretized, iterative algorithm, lossy coding maintaining an error bound is sufficient.
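A minimal sketch of the predict-and-quantize idea described in these abstracts (the trivial last-value predictor and uniform quantizer are illustrative stand-ins for the paper's predictor; entropy coding of the output is omitted):

#include <cmath>
#include <cstdint>
#include <vector>

// Sketch: quantize prediction residuals of a scalar trajectory so that the
// pointwise reconstruction error stays below tol. Predicting from the
// reconstructed (not the original) values, as the decoder must, prevents
// error accumulation; the int32 stream is then passed to an entropy coder.
std::vector<int32_t> encode(const std::vector<double>& x, double tol) {
    std::vector<int32_t> q(x.size());
    double prediction = 0.0;  // last reconstructed value
    for (std::size_t i = 0; i < x.size(); ++i) {
        q[i] = static_cast<int32_t>(std::lround((x[i] - prediction) / (2.0 * tol)));
        prediction += 2.0 * tol * q[i];  // reconstruction seen by the decoder
    }
    return q;
}

Smooth trajectories make most residuals small, so the entropy coder can store them in few bits; the achievable compression factors depend, of course, on the data and the tolerance.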
Spectral Deferred Correction methods for adaptive electro-mechanical coupling in cardiac simulation
(2014)
We investigate spectral deferred correction (SDC) methods for time stepping and their interplay with spatio-temporal adaptivity, applied to the solution of the cardiac electro-mechanical coupling model. This model consists of the Monodomain equations, a reaction-diffusion system modeling the cardiac bioelectrical activity, coupled with a quasi-static mechanical model describing the contraction and relaxation of the cardiac muscle. The numerical approximation of the cardiac electro-mechanical coupling is a challenging multiphysics problem, because it exhibits very different spatial and temporal scales. Therefore, spatio-temporal adaptivity is a promising approach to reduce the computational complexity. SDC methods are simple iterative methods for solving collocation systems. We exploit their flexibility for combining them in various ways with spatio-temporal adaptivity. The accuracy and computational complexity of the resulting methods are studied on some numerical examples.
Spectral Deferred Correction methods for adaptive electro-mechanical coupling in cardiac simulation
(2017)
This paper presents concepts and implementation of the finite element toolbox Kaskade 7, a flexible C++ code for solving elliptic and parabolic PDE systems. Issues such as problem formulation, assembly, and adaptivity are discussed using the example of optimal control problems. Trajectory compression for parabolic optimization problems is considered as a case study.
Following axon pathfinding, growth cones transition from stochastic filopodial exploration to the formation of a limited number of synapses. How the interplay of filopodia and synapse assembly ensures robust connectivity in the brain has remained a challenging problem. Here, we developed a new 4D analysis method for filopodial dynamics and a data-driven computational model of synapse formation for R7 photoreceptor axons in developing Drosophila brains. Our live data support a 'serial synapse formation' model, where at any time point only a single 'synaptogenic' filopodium suppresses the synaptic competence of other filopodia through competition for synaptic seeding factors. Loss of the synaptic seeding factors Syd-1 and Liprin-α leads to a loss of this suppression, filopodial destabilization and reduced synapse formation, which is sufficient to cause the destabilization of entire axon terminals. Our model provides a filopodial 'winner-takes-all' mechanism that ensures the formation of an appropriate number of synapses.
In high accuracy numerical simulations and optimal control of time-dependent processes, often both many time steps and fine spatial discretizations are needed. Adjoint gradient computation, or post-processing of simulation results, requires the storage of the solution trajectories over the whole time, if necessary together with the adaptively refined spatial grids. In this paper we discuss various techniques to reduce the memory requirements, focusing first on the storage of the solution data, which typically are double precision floating point values. We highlight advantages and disadvantages of the different approaches. Moreover, we present an algorithm for the efficient storage of adaptively refined, hierarchic grids, and the integration with the compressed storage of solution data.
Ray Tracing Boundary Value Problems: Simulation and SAFT Reconstruction for Ultrasonic Testing
(2016)
The application of advanced imaging techniques for the ultrasonic inspection of inhomogeneous anisotropic materials like austenitic and dissimilar welds requires information about acoustic wave propagation through the material, in particular travel times between two points in the material. Forward ray tracing is a popular approach to determine traveling paths and arrival times but is ill suited for inverse problems since a large number of rays have to be computed in order to arrive at prescribed end points.
In this contribution we discuss boundary value problems for acoustic rays, where the ray path between two given points is determined by solving the eikonal equation. The implementation of such a two point boundary value ray tracer for sound field simulations through an austenitic weld is described and its efficiency as well as the obtained results are compared to those of a forward ray tracer. The results are validated by comparison with experimental results and commercially available UT simulation tools.
As an application, we discuss an implementation of the method for SAFT (Synthetic Aperture Focusing Technique) reconstruction. The ray tracer calculates the required travel time through the anisotropic columnar grain structure of the austenitic weld. There, the formulation of ray tracing as a boundary value problem allows a straightforward derivation of the ray path from a given transducer position to any pixel in the reconstruction area and reduces the computational cost considerably.
Carbon-fiber reinforced composites are becoming more and more important in the production of light-weight structures, e.g., in the automotive and aerospace industry. Thermography is often used for non-destructive testing of these products, especially to detect delaminations between different layers of the composite.
In this presentation, we aim at methods for defect reconstruction from thermographic measurements of such carbon-fiber reinforced composites. The reconstruction results shall not only allow to locate defects, but also give a quantitative characterization of the defect properties. We discuss the simulation of the measurement process using finite element methods, as well as the experimental validation on flat bottom holes.
Especially in pulse thermography, thin boundary layers with steep temperature gradients occurring at the heated surface need to be resolved. Here we combine a 1D analytical solution with a numerical solution of the remaining defect equation. We use the simulations to identify material parameters from the measurements.
Finally, fast heuristics for reconstructing defect geometries are applied to the acquired data, and compared for their accuracy and utility in detecting different defects like back surface defects or delaminations.
We present a Newton-like method to solve inverse problems and to quantify parameter uncertainties. We apply the method to parameter reconstruction in optical scatterometry, where we take into account a priori information and measurement uncertainties using a Bayesian approach. Further, we discuss the influence of numerical accuracy on the reconstruction result.
Parabolic reaction-diffusion systems may develop sharp moving reaction fronts which pose a challenge even for adaptive finite element methods. We propose a method to transform the equation into an equivalent form that usually exhibits solutions which are easier to discretize, giving higher accuracy for a given number of degrees of freedom. The transformation is realized as an efficiently computable pointwise nonlinear scaling that is optimized for prototypical planar travelling wave solutions of the underlying reaction-diffusion equation. The gain in either performance or accuracy is demonstrated on different numerical examples.
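To make the kind of transformation concrete, consider a generic illustration (not the specific scaling constructed in the paper): for a scalar reaction-diffusion equation $u_t = \epsilon\,\Delta u + f(u)$ and a smooth, strictly monotone pointwise scaling $v = \phi(u)$, the chain rule gives $\Delta v = \phi'(u)\,\Delta u + \phi''(u)\,|\nabla u|^2$, and the transformed equation reads
\[
  v_t = \epsilon\,\Delta v - \epsilon\,\frac{\phi''(u)}{\phi'(u)^2}\,|\nabla v|^2 + \phi'(u)\, f(u), \qquad u = \phi^{-1}(v).
\]
Choosing $\phi$ flat where the travelling wave profile of $u$ is steep makes the profile of $v$ gentler, so a given mesh resolves it more accurately, at the price of an additional first-order term.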
Optimization of clinical radiofrequency hyperthermia by use of MR-thermography in a hybrid system
(2010)
Regional hyperthermia is a cancer therapy aiming at heating tumors using phased array applicators. This article provides an overview of current mathematical challenges in delivering individually optimal treatments. The focus is on therapy planning and identification of technical as well as physiological quantities from MR thermometry measurements.
Estimation of time of death based on a single measurement of body core temperature is a standard procedure in forensic medicine. Mechanistic models using simulation of heat transport promise higher accuracy than established phenomenological models, in particular in nonstandard situations, but involve many physical parameters that are not exactly known. Identifying both time of death and physical parameters from multiple temperature measurements is one possibility to reduce the uncertainty significantly.
In this paper, we consider the inverse problem in a Bayesian setting and perform both local and sampling-based uncertainty quantification, where proper orthogonal decomposition is used as model reduction for fast solution of the forward model. Based on the local uncertainty quantification, optimal design of experiments is performed in order to minimize the uncertainty in the time of death estimate for a given number of measurements. For reasons of practicability, temperature acquisition points are selected from a set of candidates at different spatial and temporal locations. Applied to a real corpse model, a significant accuracy improvement is obtained already with a small number of measurements.
This paper considers the optimal control of tuberculosis through education, a diagnosis campaign, and chemoprophylaxis of the latently infected. A mathematical model which includes important components such as undiagnosed infectious, diagnosed infectious, latently infected, and lost-sight infectious individuals is formulated. The model combines a frequency-dependent and a density-dependent force of infection for TB transmission. Through optimal control theory and numerical simulations, a cost-effective balance of two different intervention methods is obtained. Seeking to minimize the amount of money the government spends while tuberculosis remains endemic in the Cameroonian population, Pontryagin's maximum principle is used to characterize the optimal control. The optimality system is derived and solved numerically using the forward-backward sweep method (FBSM). Results provide a framework for designing cost-effective strategies for diseases with multiple intervention methods. It turns out that by combining chemoprophylaxis and education, the burden of TB can be reduced by 80% in 10 years.
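For illustration, a minimal C++ sketch of the forward-backward sweep method on a toy scalar problem (the dynamics, cost, and relaxation factor are assumptions for the sketch, not the TB model above): minimize $\int_0^1 (x^2+u^2)\,dt$ subject to $x' = -x + u$, $x(0) = 1$.

#include <cstdio>
#include <vector>

// FBSM sketch: Hamiltonian H = x^2 + u^2 + p(-x + u) gives the adjoint
// equation p' = -dH/dx = -2x + p with p(T) = 0, and the optimality
// condition dH/du = 2u + p = 0, i.e. u = -p/2.
int main() {
    const int n = 1000;
    const double T = 1.0, dt = T / n, x0 = 1.0;
    std::vector<double> x(n + 1), p(n + 1), u(n + 1, 0.0);
    for (int it = 0; it < 100; ++it) {
        x[0] = x0;  // forward sweep for the state (explicit Euler)
        for (int i = 0; i < n; ++i) x[i + 1] = x[i] + dt * (-x[i] + u[i]);
        p[n] = 0.0; // backward sweep for the adjoint
        for (int i = n; i > 0; --i) p[i - 1] = p[i] - dt * (-2.0 * x[i] + p[i]);
        for (int i = 0; i <= n; ++i)  // relaxed control update for stability
            u[i] = 0.5 * u[i] + 0.5 * (-p[i] / 2.0);
    }
    std::printf("u(0) = %f\n", u[0]);
}

The sweeps are repeated until the control stabilizes; in the paper's setting the state and adjoint are systems of ODEs for the epidemiological compartments, but the structure of the iteration is the same.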
On the Accuracy of Eikonal Approximations in Cardiac Electrophysiology in the Presence of Fibrosis
(2023)
Fibrotic tissue is one of the main risk factors for cardiac arrhythmias. It is therefore a key component in computational studies. In this work, we compare the monodomain equation to two eikonal models for cardiac electrophysiology in the presence of fibrosis. We show that discontinuities in the conductivity field, due to the presence of fibrosis, introduce a delay in the activation times. The monodomain equation and the eikonal-diffusion model correctly capture these delays, in contrast to the classical eikonal equation. Importantly, a coarse space discretization of the monodomain equation amplifies these delays, even after accounting for numerical error in conduction velocity. The numerical discretization may also introduce artificial conduction blocks and hence increase propagation complexity. Therefore, some care is required when comparing eikonal models to the discretized monodomain equation.
The paper proposes goal-oriented error estimation and mesh refinement for optimal control problems with elliptic PDE constraints using the value of the reduced cost functional as quantity of interest. Error representation, hierarchical error estimators, and greedy-style error indicators are derived and compared to their counterparts when using the all-at-once cost functional as quantity of interest. Finally, the efficiency of the error estimator and generated meshes are demonstrated on numerical examples.
Reasons for the failure of adaptive methods to deliver improved efficiency when integrating monodomain models for myocardial excitation are discussed. Two closely related techniques for reducing the computational complexity of linearly implicit integrators, deliberate sparsing and splitting, are investigated with respect to their impact on computing time and accuracy.
Numerische Mathematik 3
(2011)
Background
Mathematical optimisation models have recently been applied to identify ideal Automatic External Defibrillator (AED) locations that maximise coverage of Out of Hospital Cardiac Arrest (OHCA). However, these fixed location models cannot relocate existing AEDs in a flexible way, and have nearly exclusively been applied to urban regions. We developed a flexible location model for AEDs, compared its performance to existing fixed location and population models, and explored how these perform across urban and rural regions.
Methods
Optimisation techniques were applied to AED deployment and OHCA coverage was assessed. A total of 2802 geolocated OHCAs occurred in Canton Ticino, Switzerland, from January 1st 2005 to December 31st 2015.
Results
There were 719 AEDs in Canton Ticino. 635 (23%) OHCA events occurred within 100 m of an AED, with 306 (31%) in urban and 329 (18%) in rural areas. Median distance from OHCA events to the nearest AED was 224 m (168 m urban vs. 269 m rural). Flexible location models performed better than fixed location and population models: for the cost of deploying 20 new AEDs, 171 existing AEDs were instead relocated to new locations, improving OHCA coverage to 38%, compared to 26% using fixed models and 24% with the population-based model.
Conclusions
Optimisation models for AED placement are superior to population models and should be strongly considered by communities when selecting areas for AED deployment. Compared to other models, flexible location models increase overall OHCA coverage and decrease the distance to nearby AEDs, even in rural areas, while saving significant financial resources.
Epidemiological models can not only be used to forecast the course of a pandemic like COVID-19, but also to propose and design non-pharmaceutical interventions such as school and work closing. In general, the design of optimal policies leads to nonlinear optimization problems that can be solved by numerical algorithms. Epidemiological models come in different complexities, ranging from systems of simple ordinary differential equations (ODEs) to complex agent-based models (ABMs). The former allow a fast and straightforward optimization, but are limited in accuracy, detail, and parameterization, while the latter can resolve spreading processes in detail, but are extremely expensive to optimize. We consider policy optimization in a prototypical situation modeled as both ODE and ABM, review numerical optimization approaches, and propose a heterogeneous multilevel approach based on combining a fine-resolution ABM and a coarse ODE model. Numerical experiments, in particular with respect to convergence speed, are given for illustrative examples.
Multigrid methods for two-body contact problems are mostly based on special mortar discretizations, nonlinear Gauss-Seidel solvers, and solution-adapted coarse grid spaces. Their high computational efficiency comes at the cost of a complex implementation and a nonsymmetric master-slave discretization of the nonpenetration condition. Here we investigate an alternative symmetric and overconstrained segment-to-segment contact formulation that allows for a simple implementation based on standard multigrid and a symmetric treatment of contact boundaries, but leads to nonunique multipliers. For the solution of the arising quadratic programs, we propose augmented Lagrangian multigrid with overlapping block Gauss-Seidel smoothers. Approximation and convergence properties are studied numerically on standard test problems.
This paper surveys the required mathematics for a typical challenging problem from computational medicine, the cancer therapy planning in deep regional hyperthermia. In the course of many years of close cooperation with clinics, the medical problem gave rise to quite a number of subtle mathematical problems, part of which had been unsolved when the common project started. Efficiency of numerical algorithms, i.e. computational speed and monitored reliability, play a decisive role for the medical treatment. Off-the-shelf software had turned out to be not sufficient to meet the requirements of medicine. Rather, new mathematical theory as well as new numerical algorithms had to be developed. In order to make our algorithms useful in the clinical environment, new visualization software, a virtual lab, including 3D geometry processing of individual virtual patients had to be designed and implemented. Moreover, before the problems could be attacked by numerical algorithms, careful mathematical modelling had to be done. Finally, parameter identification and constrained optimization for the PDEs had to be newly analyzed and realized over the individual patient's geometry. Our new techniques had an impact on the specificity of the individual patients' treatment and on the construction of an improved hyperthermia applicator.
Parallel in time methods for solving initial value problems are a means to increase the parallelism of numerical simulations. Hybrid parareal schemes interleaving the parallel in time iteration with an iterative solution of the individual time steps are among the most efficient methods for general nonlinear problems. Despite the hiding of communication time behind computation, communication has in certain situations a significant impact on the total runtime. Here we present strict, yet not sharp, error bounds for hybrid parareal methods with inexact communication due to lossy data compression, and derive theoretical estimates of the impact of compression on the parallel efficiency of the algorithms. These and some computational experiments suggest that compression is a viable method to make hybrid parareal schemes robust with respect to low-bandwidth setups.
This paper presents efficient computational techniques for solving an optimization problem in cardiac defibrillation governed by the monodomain equations. Time-dependent electrical currents injected at different spatial positions act as the control. Inexact Newton-CG methods are used, with reduced gradient computation by adjoint solves. In order to reduce the computational complexity, adaptive mesh refinement for state and adjoint equations is performed. To reduce the high storage and bandwidth demand imposed by adjoint gradient and Hessian-vector evaluations, a lossy compression technique for storing trajectory data is applied. An adaptive choice of quantization tolerance based on error estimates is developed in order to ensure convergence. The efficiency of the proposed approach is demonstrated on numerical examples.
For the solution of optimal control problems governed by nonlinear parabolic PDEs, methods working on the reduced objective functional are often employed to avoid a full spatio-temporal discretization of the problem. The evaluation of the reduced gradient requires one solve of the state equation forward in time, and one backward solve of the adjoint equation. The state enters into the adjoint equation, requiring the storage of a full 4D data set. If Newton-CG methods are used, two additional trajectories have to be stored. To get numerical results which are accurate enough, in many cases very fine discretizations in time and space are necessary, which leads to a significant amount of data to be stored and transmitted to mass storage. Lossy compression methods were developed to overcome the storage problem by reducing the accuracy of the stored trajectories. The inexact data induces errors in the reduced gradient and reduced Hessian. In this paper, we analyze the influence of such a lossy trajectory compression method on Newton-CG methods for optimal control of parabolic PDEs and design an adaptive strategy for choosing appropriate quantization tolerances.
The paper provides a detailed analysis of a short step interior point algorithm applied to linear control constrained optimal control problems. Using an affine invariant local norm and an inexact Newton corrector, the well-known convergence results from finite dimensional linear programming can be extended to the infinite dimensional setting of optimal control. The present work complements a recent paper of Weiser and Deuflhard, where convergence rates have not been derived. The choice of free parameters, i.e. the corrector accuracy and the number of corrector steps, is discussed.
Kaskade 7 is a finite element toolbox for the solution of stationary or transient systems of partial differential equations, aimed at supporting application-oriented research in numerical analysis and scientific computing. The library is written in C++ and is based on the Dune interface. The code is independent of spatial dimension and works with different grid managers. An important feature is the mix-and-match approach to discretizing systems of PDEs with different ansatz and test spaces for all variables.
We describe the mathematical concepts behind the library as well as its structure, illustrating its use with several examples along the way.
Inside Finite Elements
(2016)
All relevant implementation aspects of finite element methods are discussed in this book. The focus is on algorithms and data structures as well as on their concrete implementation. Theory is covered as far as it gives insight into the construction of algorithms. Throughout the exercises, a complete FE-solver for scalar 2D problems will be implemented in Matlab/Octave.
Fast nonlinear programming methods following the all-at-once approach usually employ Newton's method for solving linearized Karush-Kuhn-Tucker (KKT) systems. In nonconvex problems, the Newton direction is only guaranteed to be a descent direction if the Hessian of the Lagrange function is positive definite on the nullspace of the active constraints, otherwise some modifications to Newton's method are necessary. This condition can be verified using the signs of the KKT's eigenvalues (inertia), which are usually available from direct solvers for the arising linear saddle point problems. Iterative solvers are mandatory for very large-scale problems, but in general do not provide the inertia. Here we present a preconditioner based on a multilevel incomplete $LBL^T$ factorization, from which an approximation of the inertia can be obtained. The suitability of the heuristics for application in optimization methods is verified on an interior point method applied to the CUTE and COPS test problems, on large-scale 3D PDE-constrained optimal control problems, as well as 3D PDE-constrained optimization in biomedical cancer hyperthermia treatment planning. The efficiency of the preconditioner is demonstrated on convex and nonconvex problems with $150^3$ state variables and $150^2$ control variables, both subject to bound constraints.
The growing discrepancy between CPU computing power and memory bandwidth drives more and more numerical algorithms into a bandwidth-bound regime. One example is the overlapping Schwarz smoother, a highly effective building block for iterative multigrid solution of elliptic equations with higher order finite elements. Two options of reducing the required memory bandwidth are sparsity exploiting storage layouts and representing matrix entries with reduced precision in floating point or fixed point format. We investigate the impact of several options on storage demand and contraction rate, both analytically in the context of subspace correction methods and numerically at an example of solid mechanics. Both perspectives agree on the favourite scheme: fixed point representation of Cholesky factors in nested dissection storage.
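A hedged sketch of the favoured scheme's core ingredient, fixed point representation of factor entries (the per-block scaling and 16-bit width are illustrative choices, not necessarily the paper's exact layout):

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Sketch: store a block of Cholesky factor entries in 16-bit fixed point,
// with one double scale per block: 2 bytes per entry instead of 8, read
// back with a relative block error of about 2^-15.
struct FixedBlock {
    double scale;               // max |entry| of the block
    std::vector<int16_t> data;  // entries scaled to [-32767, 32767]
};

FixedBlock compress(const std::vector<double>& block) {
    double m = 0.0;
    for (double v : block) m = std::max(m, std::fabs(v));
    FixedBlock fb{m > 0.0 ? m : 1.0, {}};
    fb.data.reserve(block.size());
    for (double v : block)
        fb.data.push_back(static_cast<int16_t>(std::lround(v / fb.scale * 32767.0)));
    return fb;
}

double entry(const FixedBlock& fb, std::size_t i) {  // on-the-fly decompression
    return fb.data[i] / 32767.0 * fb.scale;
}

Since the factors are only used inside a smoother, the quantization error perturbs the preconditioner rather than the solution, which is why a moderate loss of precision can be traded for bandwidth.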
Solving partial differential equations on unstructured grids is a cornerstone of engineering and scientific computing. Nowadays, heterogeneous parallel platforms with CPUs, GPUs, and FPGAs enable energy-efficient and computationally demanding simulations. We developed the HighPerMeshes C++-embedded Domain-Specific Language (DSL) for bridging the abstraction gap between the mathematical and algorithmic formulation of mesh-based algorithms for PDE problems on the one hand and an increasing number of heterogeneous platforms with their different parallel programming and runtime models on the other hand. Thus, the HighPerMeshes DSL aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HighPerMeshes DSL, and demonstrate its usage with three examples, a Poisson and monodomain problem, respectively, solved by the continuous finite element method, and the discontinuous Galerkin method for Maxwell's equations. The mapping of the abstract algorithmic description onto parallel hardware, including distributed memory compute clusters, is presented. Finally, the achievable performance and scalability are demonstrated for a typical example problem on a multi-core CPU cluster.
A primal-dual interior point method for optimal control problems with PDE constraints is considered. The algorithm is directly applied to the infinite dimensional problem. Existence and convergence of the central path are analyzed. Numerical results from an inexact continuation method applied to a model problem are shown.
We consider Large Deformation Diffeomorphic Metric Mapping of general $m$-currents. After stating an optimization algorithm in the function space of admissible morph generating velocity fields, two innovative aspects in this framework are presented and numerically investigated: First, we spatially discretize the velocity field with conforming adaptive finite elements and discuss advantages of this new approach. Second, we directly compute the temporal evolution of discrete $m$-current attributes.
Spectral deferred correction methods for solving stiff ODEs are known to converge rapidly towards the collocation limit solution on equidistant grids, but show a much less favourable contraction on non-equidistant grids such as Radau-IIa points. We interpret SDC methods as fixed point iterations for the collocation system and propose new DIRK-type sweeps for stiff problems based on purely linear algebraic considerations. Good convergence is recovered also on non-equidistant grids. The properties of different variants are explored on a couple of numerical examples.
Geometric predicates are at the core of many algorithms, such as the construction of Delaunay triangulations, mesh processing and spatial relation tests.
These algorithms have applications in scientific computing, geographic information systems and computer-aided design.
With floating-point arithmetic, these geometric predicates can incur round-off errors that may lead to incorrect results and inconsistencies, causing computations to fail.
This issue has been addressed using a combination of exact arithmetic for robustness and floating-point filters to mitigate the computational cost of exact computations.
The implementation of exact computations and floating-point filters can be a difficult task, and code generation tools have been proposed to address this.
We present a new C++ meta-programming framework for the generation of fast, robust implementations of arbitrary geometric predicates based on polynomial expressions.
We combine and extend different approaches to filtering, branch reduction, and overflow avoidance that have previously been proposed.
We show examples of how this approach produces correct results for data sets that could lead to incorrect predicate results with naive implementations.
Our benchmark results demonstrate that our implementation surpasses state-of-the-art implementations.
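To illustrate the filtering idea for readers unfamiliar with it, here is a semi-static filter for the classic 2D orientation predicate (the error constant and the long double fallback are simplifications for this sketch; generated robust code would fall back to exact arithmetic):

#include <cmath>

// Sign of the 2D orientation determinant with a floating-point filter:
// if |det| exceeds a bound on the accumulated round-off, the sign computed
// in double precision is provably correct; otherwise we must recompute
// more precisely (here merely in long double, for illustration).
int orient2d_filtered(double ax, double ay, double bx, double by,
                      double cx, double cy) {
    double l = (ax - cx) * (by - cy);
    double r = (ay - cy) * (bx - cx);
    double det = l - r;
    double errbound = 4.0 * 1.1102230246251565e-16 * (std::fabs(l) + std::fabs(r));
    if (det >  errbound) return  1;
    if (det < -errbound) return -1;
    long double ld = ((long double)ax - cx) * ((long double)by - cy)
                   - ((long double)ay - cy) * ((long double)bx - cx);
    return (ld > 0) - (ld < 0);
}

The filter accepts the fast path in the vast majority of calls, so the expensive exact evaluation is only paid for nearly degenerate inputs.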
Cardiac electrograms are an important tool to study the spread of excitation waves inside the heart, which in turn underlie muscle contraction. Electrograms can be used to analyse the dynamics of these waves, e.g. in fibrotic tissue. In computational models, these analyses can be done with greater detail than during minimally invasive in vivo procedures. Whilst homogenised models have been used to study electrogram genesis, such analyses have not yet been done in cellularly resolved models. Such high resolution may be required to develop a thorough understanding of the mechanisms behind abnormal excitation patterns leading to arrhythmias. In this study, we derived electrograms from an excitation propagation simulation in the Extracellular, Membrane, Intracellular (EMI) model, which represents these three domains explicitly in the mesh. We studied the effects of the microstructural excitation dynamics on electrogram genesis and morphology. We found that electrograms are sensitive to the myocyte alignment and connectivity, which translates into micro-fractionations in the electrograms.
Efficient numerical methods for simulating cardiac electrophysiology with cellular resolution
(2023)
The cardiac extracellular-membrane-intracellular (EMI) model enables the precise geometrical representation and resolution of aggregates of individual myocytes. As a result, it not only yields more accurate simulations of cardiac excitation compared to homogenized models but also presents the challenge of solving much larger problems. In this paper, we introduce recent advancements in three key areas: (i) the creation of artificial, yet realistic grids, (ii) efficient higher-order time stepping achieved by combining low-overhead spatial adaptivity on the algebraic level with progressive spectral deferred correction methods, and (iii) substructuring domain decomposition preconditioners tailored to address the complexities of heterogeneous problem structures. The efficiency gains of these proposed methods are demonstrated through numerical results on cardiac meshes of different sizes.
Aims. Detection and quantification of myocardial scars are helpful both for diagnosis of heart diseases and for building personalized simulation models. Scar tissue is generally characterized by a different conduction of electrical excitation. We aim at estimating conductivity-related parameters from endocardial mapping data, in particular the conductivity tensor. Solving this inverse problem requires computationally expensive monodomain simulations on fine discretizations. Therefore, we aim at accelerating the estimation using a multilevel method combining electrophysiology models of different complexity, namely the monodomain and the eikonal model.
Methods. Distributed parameter estimation is performed by minimizing the misfit between simulated and measured electrical activity on the endocardial surface, subject to the monodomain model and regularization, leading to a constrained optimization problem. We formulate this optimization problem, including the modeling of scar tissue and different regularizations, and design an efficient iterative solver. We consider monodomain grid hierarchies and monodomain-eikonal model hierarchies in a recursive multilevel trust-region method.
Results. From several numerical examples, both the efficiency of the method and the estimation quality, depending on the data, are investigated. The multilevel solver is significantly faster than a comparable single level solver. Endocardial mapping data of realistic density appears to be just sufficient to provide quantitatively reasonable estimates of location, size, and shape of scars close to the endocardial surface.
Conclusion. In several situations, scar reconstruction based on eikonal and monodomain models differ significantly, suggesting the use of the more accurate but more expensive monodomain model for this purpose. Still, eikonal models can be utilized to accelerate the computations considerably, enabling the use of complex electrophysiology models for estimating myocardial scars from endocardial mapping data.
Pulse thermography of concrete structures is used in civil engineering for detecting voids, honeycombing and delamination. The physical situation is readily modeled by Fourier's law. Despite the simplicity of the PDE structure, quantitatively realistic numerical 3D simulation faces two major obstacles. First, the short heating pulse induces a thin boundary layer at the heated surface which encapsulates all information and therefore has to be resolved faithfully. Even with adaptive mesh refinement techniques, obtaining useful accuracies requires an unsatisfactorily fine discretization. Second, bulk material parameters and boundary conditions are barely known exactly. We address both issues by a semi-analytic reformulation of the heat transport problem and by parameter identification. Numerical results are compared with measurements of test specimens.
The bidomain model of cardioelectric excitation consists of a reaction-diffusion equation, an elliptic algebraic constraint, and a set of pointwise ODEs. Fast reaction enforces small time steps, such that for common mesh sizes the reaction-diffusion equation is easily solved implicitly due to a dominating mass matrix. In contrast, the elliptic constraint does not benefit from small time steps and requires a comparably expensive solution. We propose a delayed residual compensation that improves the solution of the elliptic constraint and thus alleviates the need for long iteration times.
Pulse thermography is a non-destructive testing method based on infrared imaging of transient thermal patterns. Heating the surface of the structure under test for a short period of time generates a non-stationary temperature distribution and thus a thermal contrast between the defect and the sound material. Due to measurement noise, preprocessing of the experimental data is necessary, before reconstruction algorithms can be applied. We propose a decomposition of the measured temperature into Green's function solutions to eliminate noise.
Container Adaptors
(1999)
The C++ standard template library has many useful containers for data. The standard library includes two adaptors, queue and stack. The authors have extended this model along the lines of relational database semantics. Sometimes the analogy is striking, and we will point it out occasionally. An adaptor allows the standard algorithms to be used on a subset or modification of the data without having to copy the data elements into a new container. The authors provide many useful adaptors which can be used together to produce interesting views of data in a container.
Solvers for partial differential equations (PDEs) are one of the cornerstones of computational science. For large problems, they involve huge amounts of data that need to be stored and transmitted on all levels of the memory hierarchy. Often, bandwidth is the limiting factor due to the relatively small arithmetic intensity, and increasingly due to the growing disparity between computing power and bandwidth. Consequently, data compression techniques have been investigated and tailored towards the specific requirements of PDE solvers over the recent decades. This paper surveys data compression challenges and discusses examples of corresponding solution approaches for PDE problems, covering all levels of the memory hierarchy from mass storage up to the main memory. We illustrate concepts for particular methods, with examples, and give references to alternatives.