Temperature-based time of death estimation (TTDE) using simulation methods such as the finite element (FE) method promises higher accuracy and broader applicability in nonstandard cooling scenarios than established phenomenological methods. The accuracy of such simulations depends crucially on how well the model captures the actual situation. The model fidelity in turn hinges on the representation of the corpse’s anatomy in the form of computational meshes as well as on the thermodynamic parameters.
While inaccuracies in anatomy representation due to coarse mesh resolution are known to have a minor impact on the estimated time of death, the sensitivity with respect to larger differences in the anatomy has so far not been studied. We assess this sensitivity by comparing four independently generated and vastly different anatomical models in terms of the estimated time of death in an identical cooling scenario. In order to isolate the impact of shape variation, the models are scaled to a reference size, and the possible impact of varying the measurement location is explicitly excluded, which yields a lower bound on the impact of anatomy on the estimated time of death.
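As a minimal illustration of the inversion step underlying any temperature-based method, the following sketch inverts a forward cooling model for the time since death; a one-compartment Newton cooling law with hypothetical parameter values stands in for a full FE simulation:

```python
# Minimal sketch: estimate the time since death by inverting a forward
# cooling model. A one-compartment Newton cooling law stands in for a full
# finite element simulation; all parameter values are hypothetical.
import math
from scipy.optimize import brentq

T_AMBIENT = 18.0   # ambient temperature [deg C], assumed constant
T_DEATH = 37.2     # core temperature at time of death [deg C]
K = 0.08           # effective cooling constant [1/h]

def forward_model(t):
    """Core temperature t hours after death (stand-in for an FE solve)."""
    return T_AMBIENT + (T_DEATH - T_AMBIENT) * math.exp(-K * t)

def estimate_time_of_death(measured_temp, horizon=72.0):
    """Find t with forward_model(t) == measured_temp by root bracketing."""
    return brentq(lambda t: forward_model(t) - measured_temp, 0.0, horizon)

print(f"estimated time since death: {estimate_time_of_death(27.0):.1f} h")
```

In an FE-based workflow, forward_model would be replaced by a transient heat conduction solve on the anatomical mesh, which is exactly where the anatomy sensitivity studied here enters.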
Geometric predicates are at the core of many algorithms, such as the construction of Delaunay triangulations, mesh processing, and spatial relation tests.
These algorithms have applications in scientific computing, geographic information systems and computer-aided design.
With floating-point arithmetic, these geometric predicates can incur round-off errors that may lead to incorrect results and inconsistencies, causing computations to fail.
This issue has been addressed using a combination of exact arithmetic for robustness and floating-point filters to mitigate the computational cost of exact computations.
The implementation of exact computations and floating-point filters can be a difficult task, and code generation tools have been proposed to address this.
We present a new C++ meta-programming framework for the generation of fast, robust implementations of arbitrary geometric predicates based on polynomial expressions.
We combine and extend different approaches to filtering, branch reduction, and overflow avoidance that have previously been proposed.
We show examples of how this approach produces correct results for data sets that could lead to incorrect predicate results with naive implementations.
Our benchmark results demonstrate that our implementation surpasses state-of-the-art implementations.
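To illustrate the filter-plus-exact-fallback pattern that such generated predicates follow, here is a sketch of the 2D orientation predicate with a semi-static error bound in the style of Shewchuk, written in Python for brevity (the framework itself generates C++); exact rational arithmetic stands in for the expansion arithmetic a production fallback would use:

```python
# Floating-point filter with exact fallback for the 2D orientation predicate
# (sign of the determinant of [b-a, c-a]). Python stands in for generated
# C++; exact rationals replace expansion arithmetic in the fallback.
import sys
from fractions import Fraction

EPS = sys.float_info.epsilon / 2.0       # Shewchuk's machine epsilon
CCW_ERRBOUND = (3.0 + 16.0 * EPS) * EPS  # semi-static filter constant

def sign(x):
    return (x > 0) - (x < 0)

def orient2d(ax, ay, bx, by, cx, cy):
    detleft = (ax - cx) * (by - cy)
    detright = (ay - cy) * (bx - cx)
    det = detleft - detright
    if detleft > 0.0:
        if detright <= 0.0:
            return sign(det)             # opposite signs: no cancellation
        detsum = detleft + detright
    elif detleft < 0.0:
        if detright >= 0.0:
            return sign(det)
        detsum = -detleft - detright
    else:
        return sign(det)
    if abs(det) >= CCW_ERRBOUND * detsum:
        return sign(det)                 # round-off provably cannot flip sign
    # Fallback: recompute the determinant exactly with rational arithmetic.
    ax, ay, bx, by, cx, cy = map(Fraction, (ax, ay, bx, by, cx, cy))
    return sign((ax - cx) * (by - cy) - (ay - cy) * (bx - cx))
```

When the filter cannot certify the sign, the exact path decides; for exactly collinear inputs it returns 0, whereas naive floating-point evaluation can report a spurious nonzero sign.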
Adaptive Gaussian Process Regression for Efficient Building of Surrogate Models in Inverse Problems
(2023)
In a task where many similar inverse problems must be solved, evaluating costly simulations is impractical. Therefore, replacing the model y with a surrogate model y_s that can be evaluated quickly leads to a significant speedup. The approximation quality of the surrogate model depends strongly on the number, position, and accuracy of the sample points. Given a finite computational budget, this leads to a problem of (computer) experimental design. In contrast to the selection of sample points, the trade-off between accuracy and effort has hardly been studied systematically. We therefore propose an adaptive algorithm to find an optimal design in terms of position and accuracy. Pursuing a sequential design by incrementally allocating the computational budget leads to a convex and constrained optimization problem. As a surrogate, we construct a Gaussian process regression model. We measure the global approximation error in terms of its impact on the accuracy of the identified parameter and aim for a uniform absolute tolerance, assuming that y_s is computed by finite element calculations. A priori error estimates and a coarse estimate of computational effort relate the expected improvement of the surrogate model error to computational effort, resulting in the most efficient combination of sample point and evaluation tolerance. We also allow for improving the accuracy of already existing sample points by continuing previously truncated finite element solution procedures.
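A minimal sketch of the sequential design loop, assuming a squared-exponential kernel, a synthetic stand-in model, and greedy selection by posterior variance (the paper's coupling of sample position with evaluation tolerance and its error estimates are omitted):

```python
# Minimal sketch of sequential experimental design with Gaussian process
# regression: greedily add the candidate point of largest posterior variance.
# The per-sample accuracy/effort trade-off from the paper is omitted.
import numpy as np

def sqexp_kernel(X1, X2, length=0.3):
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xstar, noise=1e-6):
    K = sqexp_kernel(X, X) + noise * np.eye(len(X))
    Ks = sqexp_kernel(X, Xstar)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.ones(len(Xstar)) - (v ** 2).sum(axis=0)
    return mean, var

model = lambda x: np.sin(6 * x)          # stand-in for a costly FE model
X = np.array([0.0, 1.0])                 # initial design
y = model(X)
candidates = np.linspace(0.0, 1.0, 201)
for _ in range(8):                       # spend the budget point by point
    _, var = gp_posterior(X, y, candidates)
    xnew = candidates[np.argmax(var)]    # largest predictive variance
    X, y = np.append(X, xnew), np.append(y, model(xnew))
mean, var = gp_posterior(X, y, candidates)
print(f"max posterior std after adaptation: {np.sqrt(var.max()):.3e}")
```

The adaptive algorithm proposed here additionally weighs each candidate's expected error reduction against the finite element effort needed to evaluate it at a given tolerance.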
Epidemiological models can not only be used to forecast the course of a pandemic like COVID-19, but also to propose and design non-pharmaceutical interventions such as school and work closing. In general, the design of optimal policies leads to nonlinear optimization problems that can be solved by numerical algorithms. Epidemiological models come in different complexities, ranging from systems of simple ordinary differential equations (ODEs) to complex agent-based models (ABMs). The former allow a fast and straightforward optimization, but are limited in accuracy, detail, and parameterization, while the latter can resolve spreading processes in detail, but are extremely expensive to optimize. We consider policy optimization in a prototypical situation modeled as both ODE and ABM, review numerical optimization approaches, and propose a heterogeneous multilevel approach based on combining a fine-resolution ABM and a coarse ODE model. Numerical experiments, in particular with respect to convergence speed, are given for illustrative examples.
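A minimal sketch of the coarse (ODE) level of such a multilevel approach, with a hypothetical SIR model and cost functional; the coarse optimum would then seed a small number of expensive ABM-level correction steps:

```python
# Minimal sketch of ODE-level policy optimization as the coarse stage of a
# multilevel scheme: optimize a contact-reduction policy u on a cheap SIR
# ODE. All model parameters and cost weights are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

BETA, GAMMA, DAYS, DT = 0.3, 0.1, 160, 0.5

def sir_infections(u):
    """Cumulative infected share under contact reduction u (explicit Euler)."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(DAYS / DT)):
        new_inf = (1.0 - u) * BETA * s * i * DT
        s, i, r = s - new_inf, i + new_inf - GAMMA * i * DT, r + GAMMA * i * DT
    return r + i

def cost(u):
    return sir_infections(u) + 0.5 * u ** 2  # infections + intervention cost

res = minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded")
print(f"coarse ODE-optimal contact reduction: u = {res.x:.2f}")
# A multilevel scheme would now evaluate the ABM near res.x and correct the
# coarse optimum with a few fine-level steps, avoiding many costly ABM runs.
```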