We consider Large Deformation Diffeomorphic Metric Mapping of general $m$-currents. After stating an optimization algorithm in the function space of admissible morph-generating velocity fields, two innovative aspects of this framework are presented and numerically investigated: First, we spatially discretize the velocity field with conforming adaptive finite elements and discuss the advantages of this new approach. Second, we directly compute the temporal evolution of discrete $m$-current attributes.
The locality of solution features in cardiac electrophysiology simulations calls for adaptive methods. Due to the overhead incurred by established mesh refinement and coarsening, however, such approaches have failed to accelerate the computations. Here we investigate a different route to spatial adaptivity that is based on nested subset selection for algebraic degrees of freedom in spectral deferred correction methods. This combination of algebraic adaptivity and iterative solvers for higher-order collocation time stepping realizes a multirate integration with minimal overhead, leading to moderate but significant speedups in both monodomain and cell-by-cell models of cardiac excitation, as demonstrated by four numerical examples.
Aims. Detection and quantification of myocardial scars are helpful both for diagnosis of heart diseases and for building personalized simulation models. Scar tissue is generally characterized by a different conduction of electrical excitation. We aim at estimating conductivity-related parameters from endocardial mapping data, in particular the conductivity tensor. Solving this inverse problem requires computationally expensive monodomain simulations on fine discretizations. Therefore, we aim at accelerating the estimation using a multilevel method combining electrophysiology models of different complexity, namely the monodomain and the eikonal model.
Methods. Distributed parameter estimation is performed by minimizing the misfit between simulated and measured electrical activity on the endocardial surface, subject to the monodomain model and regularization, leading to a constrained optimization problem. We formulate this optimization problem, including the modeling of scar tissue and different regularizations, and design an efficient iterative solver. We consider monodomain grid hierarchies and monodomain-eikonal model hierarchies in a recursive multilevel trust-region method.
Results. Several numerical examples are used to investigate both the efficiency of the method and the quality of the estimates depending on the data. The multilevel solver is significantly faster than a comparable single-level solver. Endocardial mapping data of realistic density appears to be just sufficient to provide quantitatively reasonable estimates of location, size, and shape of scars close to the endocardial surface.
Conclusion. In several situations, scar reconstructions based on eikonal and monodomain models differ significantly, suggesting the use of the more accurate but more expensive monodomain model for this purpose. Still, eikonal models can be used to accelerate the computations considerably, enabling the use of complex electrophysiology models for estimating myocardial scars from endocardial mapping data.
Efficient numerical methods for simulating cardiac electrophysiology with cellular resolution
(2023)
The cardiac extracellular-membrane-intracellular (EMI) model enables the precise geometrical representation and resolution of aggregates of individual myocytes. As a result, it not only yields more accurate simulations of cardiac excitation compared to homogenized models but also presents the challenge of solving much larger problems. In this paper, we introduce recent advancements in three key areas: (i) the creation of artificial, yet realistic grids, (ii) efficient higher-order time stepping achieved by combining low-overhead spatial adaptivity on the algebraic level with progressive spectral deferred correction methods, and (iii) substructuring domain decomposition preconditioners tailored to address the complexities of heterogeneous problem structures. The efficiency gains of these proposed methods are demonstrated through numerical results on cardiac meshes of different sizes.
Cardiac electrograms are an important tool to study the spread of excitation waves inside the heart, which in turn underlie muscle contraction. Electrograms can be used to analyse the dynamics of these waves, e.g. in fibrotic tissue. In computational models, these analyses can be done with greater detail than during minimally invasive in vivo procedures. Whilst homogenised models have been used to study electrogram genesis, such analyses have not yet been done in cellularly resolved models. Such high resolution may be required to develop a thorough understanding of the mechanisms behind abnormal excitation patterns leading to arrhythmias. In this study, we derived electrograms from an excitation propagation simulation in the Extracellular, Membrane, Intracellular (EMI) model, which represents these three domains explicitly in the mesh. We studied the effects of the microstructural excitation dynamics on electrogram genesis and morphology. We found that electrograms are sensitive to the myocyte alignment and connectivity, which translates into micro-fractionations in the electrograms.
Flight planning, the computation of optimal routes in view of flight time and fuel consumption under given weather conditions, is traditionally done by finding globally shortest paths in a predefined airway network. Free flight trajectories, not restricted to a network, have the potential to reduce the costs significantly, and can be computed using locally convergent continuous optimal control methods.
Hybrid methods that start with a discrete global search and refine with a fast continuous local optimization combine the best properties of both approaches, but rely on a good switchover, which requires error estimates for discrete paths relative to continuous trajectories.
Based on vertex density and local complete connectivity, we derive localized and a priori bounds for the flight time of discrete paths relative to the optimal continuous trajectory, and illustrate their properties on a set of benchmark problems. It turns out that localization improves the error bound by four orders of magnitude, but still leaves ample opportunities for tighter bounds using a posteriori error estimators.
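The discrete stage of such a hybrid method can be illustrated in a few lines. The following sketch runs Dijkstra's algorithm on a locally complete grid graph over the unit square with Euclidean edge lengths, assuming constant air speed and no wind so that the continuous optimum is the straight line and the discretization error is directly observable; the resolution n, the connectivity radius, and the target point are illustrative choices, not taken from the paper.

```python
import heapq, math

def discrete_flight_time(n, radius=2):
    """Dijkstra from (0,0) to (n, 3n/10) on an (n+1)^2 grid over the unit
    square, with edges to all vertices within `radius` grid steps
    (local complete connectivity); edge cost = Euclidean length."""
    h = 1.0 / n
    start, goal = (0, 0), (n, (3 * n) // 10)
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            return d
        if d > dist[(i, j)]:
            continue
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                v = (i + di, j + dj)
                if v != (i, j) and 0 <= v[0] <= n and 0 <= v[1] <= n:
                    nd = d + h * math.hypot(di, dj)
                    if nd < dist.get(v, math.inf):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))

exact = math.hypot(1.0, 0.3)        # straight-line optimum at unit speed
for n in (10, 20, 40):
    print(n, discrete_flight_time(n) - exact)
```

With the connectivity radius held fixed, the printed error is dominated by direction quantization and does not vanish under refinement alone; enlarging `radius` tightens it. This interplay between vertex density and local connectivity is exactly what the bounds above address.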
In recent years, the use of simulation-based digital twins for monitoring and assessment of complex mechanical systems has greatly expanded. Their potential to increase the information obtained from limited data makes them an invaluable tool for a broad range of real-world applications. Nonetheless, there usually exists a discrepancy between the predicted response and the measurements of the system once built. One of the main contributors to this difference, in addition to miscalibrated model parameters, is the model error. Quantifying this so-called model bias (as well as proper values for the model parameters) is critical for the reliable performance of digital twins. Model bias identification is ultimately an inverse problem where information from measurements is used to update the original model. Bayesian formulations can tackle this task. Including the model bias as a parameter to be inferred enables the use of a Bayesian framework to obtain a probability distribution that represents the uncertainty between the measurements and the model. Simultaneously, this procedure can be combined with a classic parameter updating scheme to account for the trainable parameters in the original model.
This study evaluates the effectiveness of different model bias identification approaches based on Bayesian inference methods. This includes more classical approaches such as direct parameter estimation using MCMC in a Bayesian setup, as well as more recent proposals such as stat-FEM or orthogonal Gaussian processes. Their potential use in digital twins, generalization capabilities, and computational cost are extensively analyzed.
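As a small illustration of the first family of approaches, the following sketch runs a random-walk Metropolis sampler that jointly infers a model parameter theta and an additive constant bias b, so that the posterior absorbs the model discrepancy instead of distorting theta. The data, priors, and proposal are made-up toy choices, not one of the benchmarked setups.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y_obs = 2.0 * x + 0.3 + rng.normal(0.0, 0.05, x.size)  # truth has an offset

def model(theta):
    return theta * x            # biased model: misses the offset entirely

def log_post(theta, b, sigma=0.05):
    resid = y_obs - (model(theta) + b)
    # Gaussian likelihood plus broad Gaussian priors on (theta, b)
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * (theta**2 + b**2) / 10.0**2

samples, state = [], np.array([1.0, 0.0])
lp = log_post(*state)
for _ in range(20000):
    prop = state + rng.normal(0.0, 0.05, 2)   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        state, lp = prop, lp_prop
    samples.append(state.copy())
samples = np.array(samples[5000:])            # discard burn-in
print("posterior means (theta, b):", samples.mean(axis=0))
```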
Geometric predicates are at the core of many algorithms, such as the construction of Delaunay triangulations, mesh processing and spatial relation tests.
These algorithms have applications in scientific computing, geographic information systems and computer-aided design.
With floating-point arithmetic, these geometric predicates can incur round-off errors that may lead to incorrect results and inconsistencies, causing computations to fail.
This issue has been addressed using a combination of exact arithmetic for robustness and floating-point filters to mitigate the computational cost of exact computations.
The implementation of exact computations and floating-point filters can be a difficult task, and code generation tools have been proposed to address this.
We present a new C++ meta-programming framework for the generation of fast, robust implementations of arbitrary geometric predicates based on polynomial expressions.
We combine and extend different approaches to filtering, branch reduction, and overflow avoidance that have previously been proposed.
We show examples of how this approach produces correct results for data sets that could lead to incorrect predicate results with naive implementations.
Our benchmark results demonstrate that our implementation surpasses state-of-the-art implementations.
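The core filtering mechanism can be illustrated independently of the code generation framework. The following sketch implements a simplified static filter for the 2D orientation predicate in the spirit of Shewchuk's robust predicates: a fast floating-point evaluation whose sign is accepted only if it exceeds a forward error bound, with a fallback to exact rational arithmetic otherwise. It illustrates the principle only and is not output of the presented framework.

```python
import sys
from fractions import Fraction

EPS = sys.float_info.epsilon / 2.0      # unit roundoff of double precision
CCW_ERR = (3.0 + 16.0 * EPS) * EPS      # static filter constant (Shewchuk)

def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of the signed area of triangle (a, b, c): +1 CCW, -1 CW, 0 collinear."""
    detleft = (ax - cx) * (by - cy)
    detright = (ay - cy) * (bx - cx)
    det = detleft - detright
    errbound = CCW_ERR * (abs(detleft) + abs(detright))
    if det > errbound or -det > errbound:
        return 1 if det > 0 else -1     # filter certifies the floating-point sign
    # Uncertain case: fall back to exact rational arithmetic.
    ax, ay, bx, by, cx, cy = map(Fraction, (ax, ay, bx, by, cx, cy))
    det = (ax - cx) * (by - cy) - (ay - cy) * (bx - cx)
    return (det > 0) - (det < 0)

print(orient2d(0.0, 0.0, 1.0, 0.0, 0.0, 1.0))    # fast path: +1 (CCW)
print(orient2d(0.5, 0.5, 12.0, 12.0, 24.0, 24.0))  # exact fallback: 0 (collinear)
```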
Spectral deferred correction methods for solving stiff ODEs are known to converge rapidly towards the collocation limit solution on equidistant grids, but show a much less favourable contraction on non-equidistant grids such as Radau-IIa points. We interpret SDC methods as fixed point iterations for the collocation system and propose new DIRK-type sweeps for stiff problems based on purely linear algebraic considerations. Good convergence is recovered on non-equidistant grids as well. The properties of different variants are explored on a couple of numerical examples.
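A minimal scalar example shows the quantities involved: the spectral integration matrix, the collocation limit solution, and the classical implicit-Euler sweep, whose contraction towards that limit can be observed per sweep. The node set (a stand-in for Radau-IIa points) and the test problem are illustrative, and the sweep shown is the standard one, not the new DIRK-type sweeps proposed in the paper.

```python
import numpy as np

def lagrange_integration_matrix(nodes):
    """S[m, j] = integral of the j-th Lagrange basis polynomial
    over [nodes[m], nodes[m+1]] (the spectral quadrature of SDC)."""
    n = len(nodes)
    S = np.zeros((n - 1, n))
    for j in range(n):
        c = np.array([1.0])                  # coefficients, highest power first
        for i in range(n):
            if i != j:
                c = np.convolve(c, [1.0, -nodes[i]]) / (nodes[j] - nodes[i])
        p = np.polyint(c)
        for m in range(n - 1):
            S[m, j] = np.polyval(p, nodes[m + 1]) - np.polyval(p, nodes[m])
    return S

lam = -50.0                                  # stiff scalar test problem y' = lam*y
nodes = np.array([0.0, 0.2, 0.5, 1.0])       # a non-equidistant node set
n = len(nodes)
S = lagrange_integration_matrix(nodes)

# Collocation limit: y_{m+1} = y_m + lam * sum_j S[m, j] * y_j, with y_0 = 1.
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0], b[0] = 1.0, 1.0
for m in range(n - 1):
    A[m + 1, m + 1] += 1.0
    A[m + 1, m] -= 1.0
    A[m + 1, :] -= lam * S[m, :]
y_coll = np.linalg.solve(A, b)

y = np.ones(n)                               # initial iterate: constant spread of y(0)
for k in range(8):                           # implicit-Euler SDC sweeps
    y_new = y.copy()
    for m in range(n - 1):
        dt = nodes[m + 1] - nodes[m]
        rhs = y_new[m] - dt * lam * y[m + 1] + lam * S[m, :] @ y
        y_new[m + 1] = rhs / (1.0 - dt * lam)
    y = y_new
    print(f"sweep {k + 1}: distance to collocation = {np.max(np.abs(y - y_coll)):.2e}")
```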
A primal-dual interior point method for optimal control problems with PDE constraints is considered. The algorithm is directly applied to the infinite dimensional problem. Existence and convergence of the central path are analyzed. Numerical results from an inexact continuation method applied to a model problem are shown.
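The structure of such a primal-dual path-following step can be sketched on a drastically simplified finite-dimensional model, min 1/2||u||^2 - f^T u subject to u >= 0 pointwise, with the PDE constraint omitted; only the perturbed complementarity condition u_i*lam_i = mu, the Newton corrector, and the continuation in mu are retained. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=8)                  # negative entries make the bound active there
u = np.ones_like(f)                     # strictly feasible primal start
lam = np.ones_like(f)                   # strictly positive dual start
mu = 1.0                                # central path parameter
while mu > 1e-10:
    r1 = u - f - lam                    # stationarity residual
    r2 = u * lam - mu                   # perturbed complementarity u_i*lam_i = mu
    du = -(r2 + u * r1) / (u + lam)     # Newton corrector (KKT system is diagonal here)
    dlam = du + r1
    step = np.concatenate([du, dlam])
    cur = np.concatenate([u, lam])
    neg = step < 0                      # fraction-to-the-boundary damping
    alpha = min(1.0, 0.9 * np.min(-cur[neg] / step[neg])) if neg.any() else 1.0
    u += alpha * du
    lam += alpha * dlam
    mu *= 0.2                           # continuation along the central path
print("u        :", np.round(u, 4))     # approximately max(f, 0) componentwise
print("residuals:", np.max(np.abs(u - f - lam)), np.max(np.abs(u * lam)))
```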
The paper deals with the multilevel solution of elliptic partial differential equations (PDEs) in a finite element setting: uniform ellipticity of the PDE then goes with strict monotonicity of the derivative of a nonlinear convex functional. A Newton multigrid method is advocated, wherein linear residuals are evaluated within the multigrid method for the computation of the Newton corrections. The globalization is performed by some damping of the ordinary Newton corrections. The convergence results and the algorithm may be regarded as an extension of those for local Newton methods presented recently by the authors. An affine conjugate global convergence theory is given, which covers both the exact Newton method (neglecting the occurrence of approximation errors) and inexact Newton-Galerkin methods addressing the crucial issue of accuracy matching between discretization and iteration errors. The obtained theoretical results are directly applied for the construction of adaptive algorithms. Finally, illustrative numerical experiments with a NEWTON-KASKADE code are documented.
This C++ code implements a cell-by-cell model of cardiac excitation using a piecewise-continuous finite element discretization and spectral deferred correction time stepping. The code is based on the Kaskade 7 finite element toolbox and forms a prototype for the µCarp code to be implemented in the Microcard project.
Solving partial differential equations on unstructured grids is a cornerstone of engineering and scientific computing. Nowadays, heterogeneous parallel platforms with CPUs, GPUs, and FPGAs enable energy-efficient and computationally demanding simulations. We developed the HighPerMeshes C++-embedded Domain-Specific Language (DSL) for bridging the abstraction gap between the mathematical and algorithmic formulation of mesh-based algorithms for PDE problems on the one hand and an increasing number of heterogeneous platforms with their different parallel programming and runtime models on the other hand. Thus, the HighPerMeshes DSL aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HighPerMeshes DSL, and demonstrate its usage with three examples: a Poisson and a monodomain problem, respectively, solved by the continuous finite element method, and the discontinuous Galerkin method for Maxwell's equations. The mapping of the abstract algorithmic description onto parallel hardware, including distributed memory compute clusters, is presented. Finally, the achievable performance and scalability are demonstrated for a typical example problem on a multi-core CPU cluster.
This paper introduces a novel hybrid mathematical modeling approach that effectively couples Partial Differential Equations (PDEs) with Ordinary Differential Equations (ODEs), exemplified through the simulation of epidemiological processes. The hybrid model aims to integrate the spatially detailed representation of disease dynamics provided by PDEs with the computational efficiency of ODEs. In the presented epidemiological use-case, this integration allows for the rapid assessment of public health interventions and the potential impact of infectious diseases across large populations. We discuss the theoretical formulation of the hybrid PDE-ODE model, including the governing equations and boundary conditions. The model's capabilities are demonstrated through detailed simulations of disease spread in synthetic environments and real-world scenarios, specifically focusing on the regions of Lombardy, Italy, and Berlin, Germany. Results indicate that the hybrid model achieves a balance between computational speed and accuracy, making it a valuable tool for policymakers in real-time decision-making and scenario analysis in epidemiology and potentially in other fields requiring similar modeling approaches.
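A one-dimensional toy version of such a coupling can be written down directly, as in the following sketch (illustrative only, not the model of the paper): an SIR reaction-diffusion PDE resolves a focus region, a plain SIR ODE covers the remaining population, and the two exchange infected individuals through a conservative interface flux. All rates, sizes, and units are normalized and invented.

```python
import numpy as np

nx, dx, dt = 50, 1.0 / 50, 0.01
beta, gamma, D, travel = 0.4, 0.1, 1e-3, 0.05

S_p, I_p = np.full(nx, 0.99), np.full(nx, 0.01)    # PDE region densities
S_o, I_o = 0.999, 0.001                            # ODE region fractions

def lap(u):                                        # zero-flux 1D Laplacian
    v = np.zeros_like(u)
    v[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    v[0], v[-1] = (u[1] - u[0]) / dx**2, (u[-2] - u[-1]) / dx**2
    return v

for step in range(6000):                           # explicit Euler in time
    inf_p = beta * S_p * I_p                       # local infections in the PDE region
    dS = D * lap(S_p) - inf_p
    dI = D * lap(I_p) + inf_p - gamma * I_p
    inf_o = beta * S_o * I_o                       # infections in the ODE region
    flux = travel * (I_o - I_p[-1])                # exchange at the interface
    S_p += dt * dS
    I_p += dt * dI
    I_p[-1] += dt * flux / dx                      # boundary cell gains flux / dx
    S_o += dt * (-inf_o)
    I_o += dt * (inf_o - gamma * I_o) - dt * flux  # reservoir loses the flux
print("infected: PDE region mean %.4f, ODE region %.4f" % (I_p.mean(), I_o))
```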
The growing discrepancy between CPU computing power and memory bandwidth drives more and more numerical algorithms into a bandwidth-bound regime. One example is the overlapping Schwarz smoother, a highly effective building block for iterative multigrid solution of elliptic equations with higher-order finite elements. Two options for reducing the required memory bandwidth are sparsity-exploiting storage layouts and representing matrix entries with reduced precision in floating point or fixed point format. We investigate the impact of several options on storage demand and contraction rate, both analytically in the context of subspace correction methods and numerically with an example from solid mechanics. Both perspectives agree on the favourite scheme: fixed point representation of Cholesky factors in nested dissection storage.
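The storage idea can be demonstrated in a few lines, assuming a generic SPD test matrix rather than a finite element discretization: the Cholesky factor is quantized to 16-bit fixed point (a quarter of the memory of double precision) and the quantized factorization is used as a preconditioner in a Richardson iteration, which still contracts rapidly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
A = rng.normal(size=(n, n)); A = A @ A.T + n * np.eye(n)   # SPD test matrix
L = np.linalg.cholesky(A)

scale = np.max(np.abs(L)) / 32767.0                        # int16 fixed-point scale
Lq = np.round(L / scale).astype(np.int16)                  # stored factor (2 bytes/entry)
Lf = Lq.astype(np.float64) * scale                         # dequantized for application

b = rng.normal(size=n)
x = np.zeros(n)
for k in range(20):                                        # preconditioned Richardson
    r = b - A @ x
    z = np.linalg.solve(Lf.T, np.linalg.solve(Lf, r))      # apply (Lq Lq^T)^-1 ~ A^-1
    x += z
    print(k, np.linalg.norm(r))                            # residual keeps contracting
```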
Fast nonlinear programming methods following the all-at-once approach usually employ Newton's method for solving linearized Karush-Kuhn-Tucker (KKT) systems. In nonconvex problems, the Newton direction is only guaranteed to be a descent direction if the Hessian of the Lagrange function is positive definite on the nullspace of the active constraints, otherwise some modifications to Newton's method are necessary. This condition can be verified using the signs of the KKT's eigenvalues (inertia), which are usually available from direct solvers for the arising linear saddle point problems. Iterative solvers are mandatory for very large-scale problems, but in general do not provide the inertia. Here we present a preconditioner based on a multilevel incomplete $LBL^T$ factorization, from which an approximation of the inertia can be obtained. The suitability of the heuristics for application in optimization methods is verified on an interior point method applied to the CUTE and COPS test problems, on large-scale 3D PDE-constrained optimal control problems, as well as 3D PDE-constrained optimization in biomedical cancer hyperthermia treatment planning. The efficiency of the preconditioner is demonstrated on convex and nonconvex problems with $150^3$ state variables and $150^2$ control variables, both subject to bound constraints.
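For dense matrices, the connection between a symmetric factorization and the inertia is a direct consequence of Sylvester's law of inertia, as the following sketch shows using SciPy's exact LDL^T factorization. The paper's contribution, extracting an approximate inertia from a multilevel incomplete LBL^T preconditioner, is not reproduced by this small example.

```python
import numpy as np
from scipy.linalg import ldl

A = np.array([[2.0,  1.0, 0.0],
              [1.0, -3.0, 1.0],
              [0.0,  1.0, 1.0]])       # symmetric indefinite test matrix
L, D, perm = ldl(A)                    # D is block diagonal with 1x1 / 2x2 blocks
eigs = np.linalg.eigvalsh(D)
# Sylvester's law: the inertia of D equals the inertia of A.
# (A tolerance would replace the exact zero test in practice.)
inertia = (int(np.sum(eigs > 0)), int(np.sum(eigs < 0)), int(np.sum(eigs == 0)))
print("inertia (n+, n-, n0):", inertia)
ref = np.linalg.eigvalsh(A)
print("check:", int(np.sum(ref > 0)), int(np.sum(ref < 0)))
```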
Inside Finite Elements
(2016)
All relevant implementation aspects of finite element methods are discussed in this book. The focus is on algorithms and data structures as well as on their concrete implementation. Theory is covered as far as it gives insight into the construction of algorithms. Throughout the exercises, a complete FE-solver for scalar 2D problems will be implemented in Matlab/Octave.
Kaskade 7 is a finite element toolbox for the solution of stationary or transient systems of partial differential equations, aimed at supporting application-oriented research in numerical analysis and scientific computing. The library is written in C++ and is based on the Dune interface. The code is independent of spatial dimension and works with different grid managers. An important feature is the mix-and-match approach to discretizing systems of PDEs with different ansatz and test spaces for all variables.
We describe the mathematical concepts behind the library as well as its structure, illustrating its use with several examples along the way.
The paper provides a detailed analysis of a short step interior point algorithm applied to linear control constrained optimal control problems. Using an affine invariant local norm and an inexact Newton corrector, the well-known convergence results from finite dimensional linear programming can be extended to the infinite dimensional setting of optimal control. The present work complements a recent paper of Weiser and Deuflhard, where convergence rates have not been derived. The choice of free parameters, i.e. the corrector accuracy and the number of corrector steps, is discussed.
The finite element setting for nonlinear elliptic PDEs directly leads to the minimization of convex functionals. Uniform ellipticity of the underlying PDE shows up as strict convexity of the arising nonlinear functional. The paper analyzes computational variants of Newton's method for convex optimization in an affine conjugate setting, which reflects the appropriate affine transformation behavior for this class of problems. First, an affine conjugate Newton-Mysovskikh type theorem on the local quadratic convergence of the exact Newton method in Hilbert spaces is given. It can be easily extended to inexact Newton methods, where the inner iteration is only approximately solved. For fixed finite dimension, a special implementation of a Newton-PCG algorithm is worked out. In this case, the suggested monitor for the inner iteration guarantees quadratic convergence of the outer iteration. In infinite dimensional problems, the PCG method may be just formally replaced by any Galerkin method such as FEM for linear elliptic problems. Instead of the algebraic inner iteration errors we now have to control the FE discretization errors, which is a standard task performed within any adaptive multilevel method. A careful study of the information gain per computational effort leads to the result that the quadratic convergence mode of the Newton-Galerkin algorithm is the best mode for the fixed dimensional case, whereas for an adaptive variable dimensional code a special linear convergence mode of the algorithm is definitely preferable. The theoretical results are then illustrated by numerical experiments with a NEWTON-KASKADE algorithm.
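The coupling of inner and outer accuracy can be sketched for a fixed finite dimension, with a smooth strictly convex test functional standing in for the discretized elliptic energy and a simple residual-proportional inner tolerance standing in for the affine conjugate monitor of the paper. Full Newton steps suffice for this mild test problem; in general, the damping described above would be added.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = rng.normal(size=(n, n)); A = A @ A.T / n + np.eye(n)   # SPD "stiffness" part
b = rng.normal(size=n)

grad = lambda u: A @ u + u**3 / 3.0 - b    # gradient of a strictly convex functional
hess = lambda u: A + np.diag(u**2)         # its SPD Hessian

def cg(H, rhs, rtol):
    """Plain conjugate gradients for H z = rhs, relative residual tolerance rtol."""
    z = np.zeros_like(rhs)
    r = rhs.copy(); p = r.copy()
    rr = r @ r; rr0 = rr
    while np.sqrt(rr) > rtol * np.sqrt(rr0):
        Hp = H @ p
        alpha = rr / (p @ Hp)
        z += alpha * p
        r -= alpha * Hp
        rr, rr_old = r @ r, rr
        p = r + (rr / rr_old) * p
    return z

u = np.zeros(n)
for k in range(12):                        # inexact Newton iteration
    g = grad(u)
    gnorm = np.linalg.norm(g)
    print(k, gnorm)                        # quadratic decay of the residual
    if gnorm < 1e-12:
        break
    # inner accuracy tied to the outer residual: quadratic convergence mode
    u += cg(hess(u), -g, rtol=min(0.5, gnorm))
```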
Solvers for partial differential equations (PDE) are one of the cornerstones of computational science. For large problems, they involve huge amounts of data that needs to be stored and transmitted on all levels of the memory hierarchy. Often, bandwidth is the limiting factor due to relatively small arithmetic intensity, and increasingly so due to the growing disparity between computing power and bandwidth. Consequently, data compression techniques have been investigated and tailored towards the specific requirements of PDE solvers during the last decades. This paper surveys data compression challenges and corresponding solution approaches for PDE problems, covering all levels of the memory hierarchy from mass storage up to main memory. Exemplarily, we illustrate concepts at particular methods, and give references to alternatives.
For the solution of optimal control problems governed by nonlinear parabolic PDEs, methods working on the reduced objective functional are often employed to avoid a full spatio-temporal discretization of the problem. The evaluation of the reduced gradient requires one solve of the state equation forward in time, and one backward solve of the adjoint equation. The state enters into the adjoint equation, requiring the storage of a full 4D data set. If Newton-CG methods are used, two additional trajectories have to be stored. To obtain sufficiently accurate numerical results, in many cases very fine discretizations in time and space are necessary, which leads to a significant amount of data to be stored and transmitted to mass storage. Lossy compression methods were developed to overcome the storage problem by reducing the accuracy of the stored trajectories. The inexact data induces errors in the reduced gradient and reduced Hessian. In this paper, we analyze the influence of such a lossy trajectory compression method on Newton-CG methods for optimal control of parabolic PDEs and design an adaptive strategy for choosing appropriate quantization tolerances.
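The core of such a compression scheme is quantization with a prescribed pointwise tolerance, as in the following sketch; the adaptive choice of that tolerance within the Newton-CG iteration is the subject of the paper and is not reproduced here, and the subsequent entropy coding of the integer indices is only hinted at.

```python
import numpy as np

def compress(u, tol):
    """Quantize snapshot u to integer multiples of 2*tol; pointwise error <= tol."""
    return np.round(u / (2.0 * tol)).astype(np.int32)   # indices, entropy-coded later

def decompress(q, tol):
    return q.astype(np.float64) * (2.0 * tol)

rng = np.random.default_rng(4)
trajectory = [np.sin(0.1 * t) + 0.01 * rng.normal(size=1000) for t in range(100)]
tol = 1e-3
stored = [compress(u, tol) for u in trajectory]
err = max(np.max(np.abs(decompress(q, tol) - u)) for q, u in zip(stored, trajectory))
print("max pointwise error:", err, "<= tol:", err <= tol)
```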
This paper presents efficient computational techniques for solving an optimization problem in cardiac defibrillation governed by the monodomain equations. Time-dependent electrical currents injected at different spatial positions act as the control. Inexact Newton-CG methods are used, with reduced gradient computation by adjoint solves. In order to reduce the computational complexity, adaptive mesh refinement for state and adjoint equations is performed. To reduce the high storage and bandwidth demand imposed by adjoint gradient and Hessian-vector evaluations, a lossy compression technique for storing trajectory data is applied. An adaptive choice of quantization tolerance based on error estimates is developed in order to ensure convergence. The efficiency of the proposed approach is demonstrated on numerical examples.
Parallel-in-time methods for solving initial value problems are a means to increase the parallelism of numerical simulations. Hybrid parareal schemes interleaving the parallel-in-time iteration with an iterative solution of the individual time steps are among the most efficient methods for general nonlinear problems. Despite the hiding of communication time behind computation, communication has in certain situations a significant impact on the total runtime. Here we present strict, yet not sharp, error bounds for hybrid parareal methods with inexact communication due to lossy data compression, and derive theoretical estimates of the impact of compression on the parallel efficiency of the algorithms. These and some computational experiments suggest that compression is a viable method to make hybrid parareal schemes robust with respect to low-bandwidth setups.
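A scalar toy example illustrates the mechanism: a parareal iteration for y' = lam*y with an implicit Euler coarse propagator, a finely substepped fine propagator, and quantization of the communicated interface values standing in for lossy compression. All tolerances and step counts are illustrative.

```python
import numpy as np

lam, T, N = -1.0, 2.0, 10             # y' = lam*y on [0, T], N time slices
dt = T / N

def coarse(y, h=dt):                  # one implicit Euler step per slice
    return y / (1.0 - lam * h)

def fine(y, substeps=100):            # many implicit Euler substeps per slice
    h = dt / substeps
    for _ in range(substeps):
        y = y / (1.0 - lam * h)
    return y

def compress(y, tol=1e-6):            # lossy communication of interface values
    return np.round(y / (2 * tol)) * (2 * tol)

y = np.empty(N + 1); y[0] = 1.0
for n in range(N):                    # initial coarse prediction
    y[n + 1] = coarse(y[n])
for k in range(5):                    # parareal corrections
    f_val = [fine(compress(y[n])) for n in range(N)]     # parallel in practice
    g_old = [coarse(compress(y[n])) for n in range(N)]
    y_new = np.empty(N + 1); y_new[0] = 1.0
    for n in range(N):
        y_new[n + 1] = coarse(y_new[n]) + f_val[n] - g_old[n]
    y = y_new
    print(k, abs(y[-1] - np.exp(lam * T)))
```

The printed error decreases per iteration until it stagnates at a level set by the fine propagator's accuracy and the compression tolerance `tol`; this trade-off is what the error bounds above quantify.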
This paper surveys the mathematics required for a typical challenging problem from computational medicine: cancer therapy planning in deep regional hyperthermia. In the course of many years of close cooperation with clinics, the medical problem gave rise to quite a number of subtle mathematical problems, some of which were unsolved when the joint project started. Efficiency of numerical algorithms, i.e. computational speed and monitored reliability, plays a decisive role for the medical treatment. Off-the-shelf software had turned out to be insufficient to meet the requirements of medicine. Rather, new mathematical theory as well as new numerical algorithms had to be developed. In order to make our algorithms useful in the clinical environment, new visualization software, a virtual lab, including 3D geometry processing of individual virtual patients, had to be designed and implemented. Moreover, before the problems could be attacked by numerical algorithms, careful mathematical modelling had to be done. Finally, parameter identification and constrained optimization for the PDEs had to be newly analyzed and realized over the individual patient's geometry. Our new techniques had an impact on the specificity of the individual patients' treatment and on the construction of an improved hyperthermia applicator.
Simulation-based digital twins must provide accurate, robust and reliable digital representations of their physical counterparts. Quantifying the uncertainty in their predictions plays, therefore, a key role in making better-informed decisions that impact the actual system. The update of the simulation model based on data must be then carefully implemented. When applied to complex standing structures such as bridges, discrepancies between the computational model and the real system appear as model bias, which hinders the trustworthiness of the digital twin and increases its uncertainty. Classical Bayesian updating approaches aiming to infer the model parameters often fail at compensating for such model bias, leading to overconfident and unreliable predictions. In this paper, two alternative model bias identification approaches are evaluated in the context of their applicability to digital twins of bridges. A modularized version of Kennedy and O'Hagan's approach and another one based on Orthogonal Gaussian Processes are compared with the classical Bayesian inference framework in a set of representative benchmarks. Additionally, two novel extensions are proposed for such models: the inclusion of noise-aware kernels and the introduction of additional variables not present in the computational model through the bias term. The integration of such approaches in the digital twin corrects the predictions, quantifies their uncertainty, estimates noise from unknown physical sources of error and provides further insight into the system by including additional pre-existing information without modifying the computational model.
Multigrid methods for two-body contact problems are mostly based on special mortar discretizations, nonlinear Gauss-Seidel solvers, and solution-adapted coarse grid spaces. Their high computational efficiency comes at the cost of a complex implementation and a nonsymmetric master-slave discretization of the nonpenetration condition. Here we investigate an alternative symmetric and overconstrained segment-to-segment contact formulation that allows for a simple implementation based on standard multigrid and a symmetric treatment of contact boundaries, but leads to nonunique multipliers. For the solution of the arising quadratic programs, we propose augmented Lagrangian multigrid with overlapping block Gauss-Seidel smoothers. Approximation and convergence properties are studied numerically at standard test problems.
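For a small dense quadratic program of this type, an augmented Lagrangian outer loop is easy to state, as in the following sketch; redundant rows in the constraint matrix mimic the overconstrained formulation, so multipliers need not be unique. The inner minimization uses a simple active-set iteration in place of the multigrid solver with overlapping block Gauss-Seidel smoothers described above.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 20, 30                                     # more constraints than unknowns
A = rng.normal(size=(n, n)); A = A @ A.T + np.eye(n)      # SPD energy matrix
b = rng.normal(size=n)
B = rng.normal(size=(m, n))                       # constraint rows for B u <= g
g = np.abs(rng.normal(size=m))                    # g > 0, so u = 0 is feasible

rho = 10.0                                        # augmentation parameter
lam = np.zeros(m)                                 # multiplier estimate
u = np.zeros(n)
for outer in range(30):
    for inner in range(20):                       # minimize the augmented Lagrangian
        active = (B @ u - g + lam / rho) > 0      # rows contributing to the penalty
        Ba = B[active]
        H = A + rho * Ba.T @ Ba
        rhs = b + Ba.T @ (rho * g[active] - lam[active])
        u_new = np.linalg.solve(H, rhs)
        if np.allclose(u_new, u):
            break
        u = u_new
    lam = np.maximum(0.0, lam + rho * (B @ u - g))    # multiplier update
print("max constraint violation:", np.max(B @ u - g))
```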
Epidemiological models can not only be used to forecast the course of a pandemic like COVID-19, but also to propose and design non-pharmaceutical interventions such as school and work closing. In general, the design of optimal policies leads to nonlinear optimization problems that can be solved by numerical algorithms. Epidemiological models come in different complexities, ranging from systems of simple ordinary differential equations (ODEs) to complex agent-based models (ABMs). The former allow a fast and straightforward optimization, but are limited in accuracy, detail, and parameterization, while the latter can resolve spreading processes in detail, but are extremely expensive to optimize. We consider policy optimization in a prototypical situation modeled as both ODE and ABM, review numerical optimization approaches, and propose a heterogeneous multilevel approach based on combining a fine-resolution ABM and a coarse ODE model. Numerical experiments, in particular with respect to convergence speed, are given for illustrative examples.
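The ODE side of such a multilevel approach can be condensed into a few lines: an SIR model in which a contact-reduction policy x in [0,1] scales the infection rate, with the policy chosen to balance infections against intervention cost. All coefficients and the cost weighting are illustrative; on the fine level, an ABM evaluation would replace the ODE integration.

```python
import numpy as np

beta, gamma, T, dt = 0.3, 0.1, 200.0, 0.5

def total_infected(x):
    """Integrate SIR with contact reduction x by explicit Euler; return R(T)."""
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(T / dt)):
        new_inf = (1.0 - x) * beta * S * I * dt
        S, I, R = S - new_inf, I + new_inf - gamma * I * dt, R + gamma * I * dt
    return R

cost = lambda x: total_infected(x) + 0.5 * x**2   # infections + intervention cost
xs = np.linspace(0.0, 1.0, 101)                   # grid search as a simple optimizer
x_opt = xs[np.argmin([cost(x) for x in xs])]
print("optimal contact reduction:", x_opt)
```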
Recently developed Concentric Tube Continuum Robots (CTCRs) are widely used, for example, in minimally invasive surgeries that involve navigating inside narrow body cavities close to sensitive regions. CTCRs can be controlled by extending and rotating the tubes in order to reach a target point or perform some task. The robot must deviate as little as possible from this narrow space and avoid damaging neighbouring tissue. We consider open-loop optimal control of CTCRs parameterized over pseudo-time, primarily aiming at minimizing the robot's working volume during its motion. External loads acting on the system, such as tip loads or contact with tissues, are not considered here. We also discuss the inclusion of the tip's orientation in the optimal control framework to perform certain tasks. We recall a quaternion-based formulation of the robot configuration, discuss discretization, develop optimization objectives addressing different criteria, and investigate their impact on robot path planning for several numerical examples. This framework can be applied to any backbone-based continuum robot.