The C++ Standard Template Library provides many useful containers for data, and the standard library also includes two adaptors, queue and stack. The authors have extended this model along the lines of relational database semantics. Sometimes the analogy is striking, and we will point it out occasionally. An adaptor allows the standard algorithms to be used on a subset or modification of the data without copying the data elements into a new container. The authors provide many useful adaptors that can be combined to produce interesting views of data in a container.
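To make the adaptor idea concrete, here is a minimal C++ sketch (using only standard facilities, not the authors' library): the stack adaptor reuses an underlying container, and a C++20 ranges view likewise exposes a filtered view of a container without copying its elements.

#include <deque>
#include <iostream>
#include <ranges>
#include <stack>
#include <vector>

int main() {
    // stack adapts an underlying container (here a deque) to a LIFO interface.
    std::stack<int, std::deque<int>> s;
    s.push(1);
    s.push(2);
    std::cout << s.top() << '\n';  // prints 2

    // A ranges view adapts a container to a filtered "view" in place;
    // no elements are copied into a new container.
    std::vector<int> data{1, 2, 3, 4, 5, 6};
    auto evens = data | std::views::filter([](int x) { return x % 2 == 0; });
    for (int x : evens)
        std::cout << x << ' ';     // prints 2 4 6
    std::cout << '\n';
}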
Statistical methods to design computer experiments usually rely on a Gaussian process (GP) surrogate model, and typically aim at selecting design points (combinations of algorithmic and model parameters) that minimize the average prediction variance, or maximize the prediction accuracy for the hyperparameters of the GP surrogate.
In many applications, experiments have a tunable precision, in the sense that one software parameter controls the tradeoff between accuracy and computing time (e.g., mesh size in FEM simulations or number of Monte-Carlo samples).
We formulate the problem of allocating a budget of computing time over a finite set of candidate points for the goals mentioned above. This is a continuous optimization problem, which is moreover convex whenever the accuracy-vs-computing-time tradeoff function is concave. On the other hand, using non-concave weight functions can help to identify sparse designs. In addition, using sparse kernel approximations drastically reduces the cost per iteration of the multiplicative weights updates that can be used to solve this problem.
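Schematically, the allocation problem and a multiplicative-weights step take the following generic form (the symbols below are assumptions, not the paper's notation):

% t_i = computing time spent at candidate x_i, T = total budget,
% V(t) = average GP prediction variance given the per-point precisions.
\min_{t \in \mathbb{R}^n_{\ge 0}} V(t_1,\dots,t_n)
\quad \text{s.t.} \quad \sum_{i=1}^{n} t_i \le T,
\qquad
t_i^{(k+1)} \propto t_i^{(k)} \left(-\frac{\partial V}{\partial t_i}\bigl(t^{(k)}\bigr)\right),
\quad \sum_i t_i^{(k+1)} = T.

Each update rescales every allocation by its marginal benefit and renormalizes to the budget; when V is convex, multiplicative iterations of this kind are a standard workhorse for optimal-design problems.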
Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation, as in principle all relevant cooling mechanisms can be taken into account.
The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD based on a highly detailed 3D corpse model that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that we identify the most crucial parameters to measure or estimate, and obtain a local uncertainty quantification for the ToD.
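The local uncertainty quantification mentioned at the end amounts, in generic form, to first-order sensitivity propagation (the symbols here are assumptions, not the paper's notation):

% p = (p_1,\dots,p_n) tissue and environmental parameters, \hat p their estimates;
% independent parameter uncertainties \sigma_{p_i} are assumed.
\mathrm{ToD}(p) \approx \mathrm{ToD}(\hat p)
  + \sum_{i=1}^{n} \frac{\partial \mathrm{ToD}}{\partial p_i}(\hat p)\,(p_i - \hat p_i),
\qquad
\sigma^2_{\mathrm{ToD}} \approx \sum_{i=1}^{n}
  \left(\frac{\partial \mathrm{ToD}}{\partial p_i}(\hat p)\right)^{2} \sigma^2_{p_i}.

The parameters with the largest sensitivity-weighted variance contributions are exactly the "most crucial parameters to measure or estimate".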
In several initial value problems with particularly expensive right-hand side evaluation or implicit step computation, there is a trade-off between accuracy and computational effort. We consider inexact spectral deferred correction (SDC) methods for solving such initial value problems. SDC methods are interpreted as fixed point iterations and, due to their corrective iterative nature, make it possible to exploit the accuracy-work trade-off to reduce the total computational effort. On the one hand, we derive error models bounding the total error in terms of the evaluation errors. On the other hand, we define work models describing the computational effort in terms of the evaluation accuracy. Combining both, a theoretically optimal local tolerance selection is worked out by minimizing the total work subject to achieving the requested tolerance. The properties of the optimal local tolerances, the efficiency gain predicted over simpler heuristics, and the reasonable practical performance are illustrated on simple numerical examples.
In several initial value problems with particularly expensive right-hand side computation, there is a trade-off between accuracy and computational effort in evaluating the right-hand sides. We consider inexact spectral deferred correction (SDC) methods for solving such non-stiff initial value problems. SDC methods are interpreted as fixed point iterations and, due to their corrective iterative nature, make it possible to exploit the accuracy-work trade-off to reduce the total computational effort. On the one hand, we derive an error model bounding the total error in terms of the right-hand side evaluation errors. On the other hand, we define work models describing the computational effort in terms of the evaluation accuracy. Combining both, a theoretically optimal tolerance selection is worked out by minimizing the total work subject to achieving the requested tolerance.
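The tolerance selection described in both abstracts can be written schematically as a constrained minimization (the symbols below are assumptions, not the papers' notation):

% \epsilon_k = evaluation tolerance in SDC sweep k, K = number of sweeps,
% W_k(\epsilon) = work model (cost of one sweep at tolerance \epsilon),
% E(\epsilon_1,\dots,\epsilon_K) = error model bounding the total error.
\min_{\epsilon_1,\dots,\epsilon_K > 0} \sum_{k=1}^{K} W_k(\epsilon_k)
\quad \text{s.t.} \quad E(\epsilon_1,\dots,\epsilon_K) \le \mathrm{TOL}.

Loose tolerances in early sweeps are cheap but inflate E, while tight ones are accurate but costly; the optimum balances the two through the stationarity conditions of this program.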
Temperature-based time of death estimation (TTDE) using simulation methods such as the finite element (FE) method promises higher accuracy and broader applicability in nonstandard cooling scenarios than established phenomenological methods. Their accuracy depends crucially on how well the simulation model captures the actual situation. The model fidelity in turn hinges on the representation of the corpse's anatomy in the form of computational meshes as well as on the thermodynamic parameters.
While inaccuracies in anatomy representation due to coarse mesh resolution are known to have a minor impact on the estimated time of death, the sensitivity with respect to larger differences in the anatomy has so far not been studied. We assess this sensitivity by comparing four independently generated and vastly different anatomical models in terms of the estimated time of death in an identical cooling scenario. In order to isolate the impact of shape variation, the models are scaled to a reference size, and the possible impact of measurement location variation is excluded explicitly, which gives a lower bound on the impact of anatomy on the estimated time of death.
Solving PDEs on unstructured grids is a cornerstone of engineering and scientific computing. Heterogeneous parallel platforms, including CPUs, GPUs, and FPGAs, enable energy-efficient and computationally demanding simulations.
In this article, we introduce the HPM C++-embedded DSL that bridges the abstraction gap between the mathematical formulation of mesh-based algorithms for PDE problems on the one hand and an increasing number of heterogeneous platforms with their different programming models on the other hand.
Thus, the HPM DSL aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HPM DSL, and demonstrate its usage with three examples. The mapping of the abstract algorithmic description onto parallel hardware, including distributed memory compute clusters, is presented.
A code generator and a matching back end allow the acceleration of HPM code with GPUs. Finally, the achievable performance and scalability are demonstrated for different example problems.
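As a rough illustration of what a C++-embedded mesh DSL buys (a hypothetical sketch with invented names, not the actual HPM API): the user states the per-cell mathematics once, and the framework owns the mapping to the execution platform.

#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical mini-DSL, for illustration only (not the HPM API).
struct Mesh {
    std::size_t num_cells;
};

// forEachCell hides the execution strategy; this reference version is
// serial, but a real back end could dispatch the same kernel to OpenMP
// threads, MPI ranks, or a generated GPU kernel.
void forEachCell(const Mesh& mesh, const std::function<void(std::size_t)>& kernel) {
    for (std::size_t c = 0; c < mesh.num_cells; ++c)
        kernel(c);
}

int main() {
    Mesh mesh{1000};
    std::vector<double> u(mesh.num_cells, 1.0), rhs(mesh.num_cells, 0.5);
    const double dt = 0.1;

    // The mathematical update is written once, per cell; how it runs
    // on the target platform is the framework's concern.
    forEachCell(mesh, [&](std::size_t c) {
        u[c] += dt * rhs[c];  // e.g. one explicit Euler step
    });
}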
The paper addresses a primal interior point method for state constrained PDE optimal control problems. By a Lavrentiev regularization, the state constraint is transformed into a mixed control-state constraint with bounded Lagrange multiplier. Existence and convergence of the central path are established, and linear convergence of a short-step pathfollowing method is shown. The behaviour of the regularizations is demonstrated by numerical examples.
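The Lavrentiev regularization has a standard generic form (sketched below with assumed symbols; the paper's precise setting may differ): a pointwise state constraint is relaxed by mixing in a small multiple of the control,

y(x) \le \bar y(x)
\quad \longrightarrow \quad
\varepsilon\, u(x) + y(x) \le \bar y(x), \qquad \varepsilon > 0,

which turns it into a mixed control-state constraint whose Lagrange multiplier is typically a bounded function rather than a measure, making pathfollowing in function space tractable.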
A new approach to the numerical solution of optimal control problems including control and state constraints is presented. Like hybrid methods, the approach aims at combining the advantages of direct and indirect methods. Unlike hybrid methods, however, our method is directly based on interior-point concepts in function space --- realized via an adaptive multilevel scheme applied to the complementarity formulation and numerical continuation along the central path. Existence of the central path and its continuation towards the solution point is analyzed in some theoretical detail. An adaptive stepsize control with respect to the duality gap parameter is worked out in the framework of affine invariant inexact Newton methods. Finally, the performance of a first version of our new type of algorithm is documented by the successful treatment of the well-known intricate windshear problem.
A thorough convergence analysis of the Control Reduced Interior Point Method in function space is performed. This recently proposed method is a primal interior point pathfollowing scheme with the special feature that the control variable is eliminated from the optimality system. Apart from global linear convergence, we show that this method converges locally almost quadratically if the optimal solution satisfies a function space analogue of a non-degeneracy condition. In numerical experiments we observe that a prototype implementation of our method behaves in accordance with our theoretical results.
In optimal control problems with nonlinear time-dependent 3D PDEs, the computation of the reduced gradient by adjoint methods requires one solve of the state equation forward in time, and one backward solve of the adjoint equation. Since the state enters into the adjoint equation, the storage of a 4D discretization is necessary. We propose a lossy compression algorithm using a cheap predictor for the state data, with additional entropy coding of prediction errors. Analytical and numerical results indicate that compression factors around 30 can be obtained without exceeding the FE discretization error.
In optimal control problems with nonlinear time-dependent 3D PDEs, full 4D discretizations are usually prohibitive due to the storage requirement. For this reason gradient and quasi-Newton methods working on the reduced functional are often employed. The computation of the reduced gradient requires one solve of the state equation forward in time, and one backward solve of the adjoint equation. The state enters into the adjoint equation, again requiring the storage of a full 4D data set. We propose a lossy compression algorithm using an inexact but cheap predictor for the state data, with additional entropy coding of prediction errors. As the data is used inside a discretized, iterative algorithm, lossy coding maintaining an error bound is sufficient.
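The predict-then-encode idea described in both abstracts can be sketched in a few lines (a minimal illustration with assumed names, not the authors' algorithm): each value is predicted from the previously reconstructed one, and the prediction error is quantized to a fixed tolerance, so the stored integer residuals are small and compress well under entropy coding.

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal sketch of error-bounded predictive coding (illustration only).
// Predictor: the previously *reconstructed* value, so the pointwise
// reconstruction error never exceeds tol and errors do not accumulate.
std::vector<std::int32_t> encode(const std::vector<double>& x, double tol) {
    std::vector<std::int32_t> residuals(x.size());
    double prediction = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        const std::int32_t q =
            static_cast<std::int32_t>(std::lround((x[i] - prediction) / (2.0 * tol)));
        residuals[i] = q;             // small integers: entropy-coder friendly
        prediction += q * 2.0 * tol;  // exactly what the decoder reconstructs
    }
    return residuals;
}

std::vector<double> decode(const std::vector<std::int32_t>& residuals, double tol) {
    std::vector<double> x(residuals.size());
    double prediction = 0.0;
    for (std::size_t i = 0; i < residuals.size(); ++i) {
        prediction += residuals[i] * 2.0 * tol;
        x[i] = prediction;
    }
    return x;
}

A better predictor (e.g. extrapolation from earlier time steps) shrinks the residuals further, which is the lever behind compression factors like the ones reported above.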
Spectral Deferred Correction methods for adaptive electro-mechanical coupling in cardiac simulation
(2014)
We investigate spectral deferred correction (SDC) methods for time stepping and their interplay with spatio-temporal adaptivity, applied to the solution of the cardiac electro-mechanical coupling model. This model consists of the Monodomain equations, a reaction-diffusion system modeling the cardiac bioelectrical activity, coupled with a quasi-static mechanical model describing the contraction and relaxation of the cardiac muscle. The numerical approximation of the cardiac electro-mechanical coupling is a challenging multiphysics problem, because it exhibits very different spatial and temporal scales. Therefore, spatio-temporal adaptivity is a promising approach to reduce the computational complexity. SDC methods are simple iterative methods for solving collocation systems. We exploit their flexibility for combining them in various ways with spatio-temporal adaptivity. The accuracy and computational complexity of the resulting methods are studied on some numerical examples.
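For reference, a generic implicit SDC sweep on collocation nodes t_1 < \dots < t_M has the textbook form below (details in the paper may differ):

% k counts sweeps, q_{m,j} are quadrature weights of the collocation rule,
% \Delta t_m = t_{m+1} - t_m; at convergence U matches the collocation solution.
U_{m+1}^{k+1} = U_{m}^{k+1}
  + \Delta t_m \bigl[ f(U_{m+1}^{k+1}) - f(U_{m+1}^{k}) \bigr]
  + \sum_{j=1}^{M} q_{m,j}\, f(U_{j}^{k}).

Because each sweep is a cheap correction, the iteration can be stopped, restarted, or interleaved with mesh and time-step adaptation, which is the flexibility the abstract refers to.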
Spectral Deferred Correction methods for adaptive electro-mechanical coupling in cardiac simulation
(2017)