65-XX NUMERICAL ANALYSIS
Document Type
- ZIB-Report (35)
- Master's Thesis (5)
- Doctoral Thesis (2)
- Software (2)
Keywords
- finite element method (5)
- optimal control (4)
- reduced basis method (4)
- rigorous optical modeling (3)
- Algebraic multigrid (2)
- Automatic Differentiation (2)
- Lipschitz Continuity (2)
- Monte Carlo (2)
- Nonsmooth (2)
- PDE (2)
Institute
- Numerical Mathematics (23)
- Computational Medicine (7)
- Mathematical Optimization (6)
- Modeling and Simulation of Complex Processes (6)
- Visual and Data-centric Computing (6)
- Computational Nano Optics (5)
- Visual Data Analysis (5)
- Computational Systems Biology (4)
- Energy Network Optimization (3)
- Mathematical Optimization Methods (3)
Convergence Properties of Newton's Method for Globally Optimal Free Flight Trajectory Optimization
(2023)
The algorithmic efficiency of Newton-based methods for Free Flight Trajectory Optimization is heavily influenced by the size of the domain of convergence. We provide numerical evidence that the convergence radius is much larger in practice than theoretical worst-case bounds suggest. The algorithm can be further improved by a convergence-enhancing domain decomposition.
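The notion of an empirically measured convergence radius can be illustrated with a toy experiment (our own scalar example, unrelated to the flight-trajectory setting of the paper): sample Newton starting points at increasing distance from a known root and record up to which distance the iteration still converges.

```python
import numpy as np

f = lambda x: np.sin(x) + 0.5 * x        # toy residual with unique root x* = 0
df = lambda x: np.cos(x) + 0.5

def newton_converges(x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / df(x)             # plain Newton step
        if abs(x) < tol:
            return True
    return False

# scan outwards until a starting point fails to converge
radius = 0.0
for r in np.linspace(0.05, 6.0, 120):
    if newton_converges(r) and newton_converges(-r):
        radius = r
    else:
        break
print("empirically observed convergence radius:", radius)
```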
This paper is concerned with the exact solution of mixed-integer programs (MIPs) over the rational numbers, i.e., without any roundoff errors and error tolerances. Here, one computational bottleneck that should be avoided whenever possible is to employ large-scale symbolic computations. Instead it is often possible to use safe directed rounding methods, e.g., to generate provably correct dual bounds. In this work, we continue to leverage this paradigm and extend an exact branch-and-bound framework by separation routines for safe cutting planes, based on the approach first introduced by Cook, Dash, Fukasawa, and Goycoolea in 2009. Constraints are aggregated safely using approximate dual multipliers from an LP solve, followed by mixed-integer rounding to generate provably valid, although slightly weaker inequalities. We generalize this approach to problem data that is not representable in floating-point arithmetic, add routines for controlling the encoding length of the resulting cutting planes, and show how these cutting planes can be verified according to the VIPR certificate standard. Furthermore, we analyze the performance impact of these cutting planes in the context of an exact MIP framework, showing that we can solve 21.5% more instances and reduce solving times by 26.8% on the MIPLIB 2017 benchmark test set.
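The core idea of safe aggregation followed by mixed-integer rounding can be sketched as follows. This is a much-simplified illustration under our own assumptions (pure-integer variables, x >= 0, <=-rows, outward rounding via math.nextafter); a production-safe implementation as described in the paper would also apply directed rounding inside the MIR formula.

```python
import math

def round_down(x):
    """Next float toward -infinity: a safe under-estimate."""
    return math.nextafter(x, -math.inf)

def round_up(x):
    """Next float toward +infinity: a safe over-estimate."""
    return math.nextafter(x, math.inf)

def safe_aggregate(rows, rhs, duals):
    """Aggregate valid rows a_i . x <= b_i with multipliers lam_i >= 0.
    For x >= 0, rounding coefficients down and the right-hand side up
    keeps the aggregated inequality provably valid."""
    n = len(rows[0])
    agg, agg_rhs = [0.0] * n, 0.0
    for a, b, lam in zip(rows, rhs, duals):
        assert lam >= 0.0
        for j in range(n):
            agg[j] = round_down(agg[j] + round_down(lam * a[j]))
        agg_rhs = round_up(agg_rhs + round_up(lam * b))
    return agg, agg_rhs

def mir_cut(a, b):
    """Textbook MIR cut for sum_j a_j x_j <= b with x integer, x >= 0:
    sum_j (floor(a_j) + max(f_j - f, 0)/(1 - f)) x_j <= floor(b),
    where f = b - floor(b), f_j = a_j - floor(a_j).
    (For brevity, the divisions here are not directionally rounded.)"""
    f = b - math.floor(b)
    if f < 1e-9:
        return None                      # right-hand side integral: no cut
    coefs = [math.floor(aj) + max(aj - math.floor(aj) - f, 0.0) / (1.0 - f)
             for aj in a]
    return coefs, float(math.floor(b))

agg, agg_rhs = safe_aggregate([[1.3, 2.7], [0.4, 1.1]], [3.6, 1.9], [0.5, 1.0])
print(mir_cut(agg, agg_rhs))
```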
This thesis examines how taking surface-to-surface radiation into account affects the cooling process in general. After introducing the general setting for the cooling model, we formulate the non-local boundary condition. In Section 3, the mathematical description of the radiative heat transfer is discussed. Section 4 covers the implementation of the radiative matrix, followed by a brief explanation of the radiative matrix's structure and several techniques for dealing with the accompanying challenges.
We investigate the importance of radiative heat transport by applying the given approach to a two-dimensional geometry and computing the ensuing cooling curves. Comparing our computed results with those obtained from the conducted experiment, we find close agreement. There is a considerable difference (of about 35%) in the cooling time between a surface that may be influenced by radiation from a second surface and a surface with no such influence. Although heat convection presumably also contributes to the total result, this has yet to be proved. Nevertheless, the significance of the surface-to-surface radiative heat transfer on these parts is clearly visible, confirming the research question posed at the beginning. Surface-to-surface radiative heat transfer influences the resulting cooling time and should be considered in the model.
UG is a generic framework for parallelizing branch-and-bound based solvers (e.g., MIP, MINLP, ExactIP) in a distributed or shared memory computing environment. It exploits the powerful performance of state-of-the-art "base solvers", such as SCIP, CPLEX, etc., without the need for base solver parallelization.
The UG framework, ParaSCIP (ug[SCIP,MPI]), and FiberSCIP (ug[SCIP,Pthreads]) are available as a beta version.
v1.0.0: new documentation and CMake support, generalization of the UG framework, implementation of self-split ramp-up for FiberSCIP and ParaSCIP, better memory and time limit handling.
Sampling rare events in metastable dynamical systems is often a computationally expensive task and one needs to resort to enhanced sampling methods such as importance sampling. Since the problem of finding optimal importance sampling controls can be formulated as a stochastic optimization problem, additional numerical challenges arise, and the convergence of the corresponding algorithms may likewise suffer from metastability. In this article we address this issue by combining systematic control approaches with the heuristic adaptive metadynamics method. Crucially, we approximate the importance sampling control by a neural network, which makes the algorithm in principle feasible for high dimensional applications. We demonstrate numerically on relevant metastable problems that our algorithm is more effective than previous attempts and that only the combination of the two approaches leads to satisfying convergence and therefore to efficient sampling in certain metastable settings.
In many applications, geodesic hierarchical models are adequate for the study of temporal observations. We employ such a model derived for manifold-valued data to Kendall's shape space. In particular, instead of the Sasaki metric, we adapt a functional-based metric, which increases the computational efficiency and does not require the implementation of the curvature tensor. We propose the corresponding variational time discretization of geodesics and employ the approach for longitudinal analysis of 2D rat skull shapes as well as 3D shapes derived from an imaging study on osteoarthritis. In particular, we perform hypothesis tests and estimate mean trends.
UG is a generic framework to parallelize branch-and-bound based solvers (e.g., MIP, MINLP, ExactIP) in a distributed or shared memory computing environment. It exploits the powerful performance of state-of-the-art "base solvers", such as SCIP, CPLEX, etc. without the need for base solver parallelization.
The UG framework, ParaSCIP (ug[SCIP,MPI]), and FiberSCIP (ug[SCIP,Pthreads]) are available as a beta version. For MIP solving, ParaSCIP and FiberSCIP are well debugged and should be stable. For MINLP solving, they are relatively stable, but not as thoroughly debugged. This release version should handle branch-and-cut approaches where subproblems are defined by variable bounds and also by constraints for ug[SCIP,*] (ParaSCIP and FiberSCIP). Therefore, problem classes other than MIP or MINLP can be handled, but they have not been tested yet.
v0.9.1: Update orbitope cip files.
In this thesis, adaptive algorithms in optimization under PDE constraints have been investigated. In its application, the aim of optimization is to increase the longevity of implants, namely the hip joint implant, and in doing so to minimize stress shielding and simultaneously minimize the influence of locally high stresses that, above a threshold value, are harmful to the bone structure. Under the constraint of the equilibrium of forces, describing an elastodynamic setup, coupled with a contact inequality condition, a computationally expensive problem formulation is given.
The first step to make the solution of the given problem possible and efficient was to change over to the spatial equilibrium equation, thus rendering an elastostatic setup. Subsequently the intrinsically dynamic motions – trajectories in the load domain – were converted to the static setup. Thus, the trajectories are marginalized to the load domain and characterized by probability distributions. This simplifies the solution of the PDE constraint, the contact problem.
Yet within the whole optimization process, solving the PDE – the spatial equilibrium equation together with the contact condition – still makes the most expensive contribution and hence needed further reduction. This was achieved by applying Kriging interpolation to the load responses of the integrated distribution of stress differences and the maximum stresses. The interpolation of the two response surfaces only needs comparatively few PDE solves to set up the models. Moreover, the Kriging models can be adaptively extended by sequentially adding sample-response pairs. For this, the Kriging-inherent variance is used to estimate ideal new sample locations with maximum variance values. In doing so, the overall interpolation variance and therefore the interpolation error is reduced.
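A minimal sketch of such variance-driven adaptive extension, using a generic Gaussian-process/Kriging model on a 1D toy response in place of the actual PDE load responses (all names and values here are illustrative):

```python
import numpy as np

def rbf(A, B, ell=0.3):
    """Squared-exponential kernel between two sets of 1D locations."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def posterior(X, y, Xs, noise=1e-8):
    """GP posterior mean and variance at candidate locations Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.maximum(var, 0.0)

response = lambda x: np.sin(6 * x) + 0.5 * x    # stand-in for a PDE load response
X = np.array([0.0, 0.5, 1.0])                   # initial sample locations
y = response(X)
cand = np.linspace(0.0, 1.0, 401)               # candidate locations
for _ in range(10):                             # adaptive model extension
    _, var = posterior(X, y, cand)
    x_new = cand[np.argmax(var)]                # maximum-variance location
    X, y = np.append(X, x_new), np.append(y, response(x_new))
mean, var = posterior(X, y, cand)
print("max posterior std after refinement:", np.sqrt(var.max()))
```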
For the integration of the integrated stress differences and penalty values over the relatively high dimensional load domain, Monte Carlo integration was implemented, averting the curse of dimensionality. Here, the motion's probability distribution combined with patient-specific data on motion frequencies is exploited, making the otherwise necessary importance sampling obsolete.
Throughout the optimization, the FE discretization error and the subsequent errors entering the solution process via approximate solving of the PDE, Kriging interpolation, and Monte Carlo integration need to decrease. While the FE discretization error and the solution of the elastostatic contact problem were assumed to be precise enough, numerical experiments showed that the interpolation and integration errors can be controlled by adaptive refinement of the respective methods. For this purpose, comparable error quantities for the particular algorithms were introduced and effectively put to use.
For the optimization of the implant position, the derivative of the objective function was derived using the implicit function theorem. As the FE discretization changes with sufficiently large modifications of the implant position, a special line search had to be used to deal with the resulting discontinuities in the objective function.
The interplay and performance of the subalgorithms was demonstrated numerically on a reduced 2D setup of a hip joint with and without the implant. Consequently the load domain and the control variable were also limited to the 2D case.
The SCIP Optimization Suite provides a collection of software packages for mathematical optimization centered around the constraint integer programming framework SCIP. This paper discusses enhancements and extensions contained in version 7.0 of the SCIP Optimization Suite. The new version features the parallel presolving library PaPILO as a new addition to the suite. PaPILO 1.0 simplifies mixed-integer linear optimization problems and can be used stand-alone or integrated into SCIP via a presolver plugin. SCIP 7.0 provides additional support for decomposition algorithms. Besides improvements in the Benders' decomposition solver of SCIP, user-defined decomposition structures can be read, which are used by the automated Benders' decomposition solver and two primal heuristics. Additionally, SCIP 7.0 comes with a tree size estimation that is used to predict the completion of the overall solving process and potentially trigger restarts. Moreover, substantial performance improvements of the MIP core were achieved by new developments in presolving, primal heuristics, branching rules, conflict analysis, and symmetry handling. Last, not least, the report presents updates to other components and extensions of the SCIP Optimization Suite, in particular, the LP solver SoPlex and the mixed-integer semidefinite programming solver SCIP-SDP.
This master thesis investigates the use and behaviour of a mixed finite element formulation for the simulation of garments.
The garment is modelled as an isotropic shell and is related to its mid-surface by energetic degeneration. Based on this, an energy functional is constructed which contains the deformation and the mid-surface vector as degrees of freedom. It is then shown why this problem does not correspond to a saddle point problem, but to a non-convex energy minimization.
The energy minimization is implemented in the ZIB-internal FE framework Kaskade 7.4; a geometrically linear problem and several geometrically nonlinear problems are examined, and for a selected nonlinear example a comparison is made with an existing implementation based on Morley elements.
The further evaluations include the analysis of the quantitative and qualitative results, the solution method used, the behaviour of the system energy, and the CPU time used.
The determination of the time of death is one of the central tasks in forensic medicine. A standard method of time of death estimation relies on matching temperature measurements of the corpse with a post-mortem cooling model. In addition to widely used empirical post-mortem models, modelling based on a precise mathematical simulation of the cooling process has been gaining popularity.
The simulation-based cooling models and the resulting time of death estimates depend on a large variety of parameters. These include thermal properties for different body tissue types, environmental conditions such as temperature and air flow, and the presence of clothing and coverings. In this thesis we focus on a specific parameter – the contact between the corpse and the underlying surface – and investigate its influence on the time of death estimation. From the results we aim to answer the question whether it is necessary to consider contact mechanics in the underlying mathematical cooling model.
Quantitative PA tomography of high resolution 3-D images: experimental validation in tissue phantoms
(2019)
Quantitative photoacoustic tomography aims to recover the spatial distribution of absolute chromophore concentrations and their ratios from deep tissue, high-resolution images. In this study, a model-based inversion scheme based on a Monte-Carlo light transport model is experimentally validated on 3-D multispectral images of a tissue phantom acquired using an all-optical scanner with a planar detection geometry. A calibrated absorber allowed scaling of the measured data during the inversion, while an acoustic correction method was employed to compensate for the effects of limited-view detection. Chromophore- and fluence-dependent step sizes and Adam optimization were implemented to achieve rapid convergence. High-resolution 3-D maps of absolute concentrations and their ratios were recovered with high accuracy. Potential applications of this method include quantitative functional and molecular photoacoustic tomography of deep tissue in preclinical and clinical studies.
In many applications, geodesic hierarchical models are adequate for the study of temporal observations. We employ such a model derived for manifold-valued data to Kendall's shape space. In particular, instead of the Sasaki metric, we adapt a functional-based metric, which increases the computational efficiency and does not require the implementation of the curvature tensor. We propose the corresponding variational time discretization of geodesics and apply the approach for the estimation of group trends and statistical testing of 3D shapes derived from an open access longitudinal imaging study on osteoarthritis.
Quantitative photoacoustic tomography aims to recover maps of the local concentrations of tissue chromophores from multispectral images. While model-based inversion schemes are promising approaches, major challenges to their practical implementation include the unknown fluence distribution and the scale of the inverse problem. This paper describes an inversion scheme based on a radiance Monte Carlo model and an adjoint-assisted gradient optimization that incorporates fluence-dependent step sizes and adaptive moment estimation. The inversion is shown to recover absolute chromophore concentrations, blood oxygen saturation and the Grüneisen parameter from in silico 3D phantom images for different radiance approximations. The scattering coefficient was assumed to be homogeneous and known a priori.
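The adaptive-moment-estimation (Adam) update named in these abstracts has a compact generic form; the per-parameter step-size vector below stands in for the fluence- and chromophore-dependent step sizes (toy objective, our own illustration):

```python
import numpy as np

def adam_step(theta, grad, state, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam (adaptive moment estimation) update; `lr` may be a vector,
    e.g. a per-chromophore or fluence-dependent step size."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2       # second-moment estimate
    m_hat = m / (1 - beta1**t)                  # bias corrections
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Toy usage: minimize a quadratic misfit with parameter-dependent step sizes.
theta = np.array([5.0, -3.0])
state = (np.zeros(2), np.zeros(2), 0)
lr = np.array([0.1, 0.05])                      # stand-in for per-parameter scaling
for _ in range(500):
    grad = 2 * (theta - np.array([1.0, 2.0]))   # gradient of ||theta - target||^2
    theta, state = adam_step(theta, grad, state, lr)
print(theta)                                    # close to [1., 2.]
```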
The SCIP Optimization Suite provides a collection of software packages for mathematical optimization centered around the constraint integer programming framework SCIP. This paper discusses enhancements and extensions contained in version 6.0 of the SCIP Optimization Suite. Besides performance improvements of the MIP and MINLP core achieved by new primal heuristics and a new selection criterion for cutting planes, one focus of this release is decomposition algorithms. Both SCIP and the automatic decomposition solver GCG now include advanced functionality for performing Benders' decomposition in a generic framework. GCG's detection loop for structured matrices and the coordination of pricing routines for Dantzig-Wolfe decomposition have been significantly revised for greater flexibility. Two SCIP extensions have been added to solve the recursive circle packing problem by a problem-specific column generation scheme and to demonstrate the use of the new Benders' framework for stochastic capacitated facility location. Last, not least, the report presents updates and additions to the other components and extensions of the SCIP Optimization Suite: the LP solver SoPlex, the modeling language Zimpl, the parallelization framework UG, the Steiner tree solver SCIP-Jack, and the mixed-integer semidefinite programming solver SCIP-SDP.
In this article we analyze a generalized trapezoidal rule for initial value problems with piecewise smooth right hand side \(F: R^n \to R^n\). When applied to such a problem, the classical trapezoidal rule suffers from a loss of accuracy if the solution trajectory intersects a non-differentiability of \(F\). In such a situation the investigated generalized trapezoidal rule achieves a higher convergence order than the classical method. While the asymptotic behavior of the generalized method was investigated in a previous work, in the present article we develop the algorithmic structure for efficient implementation strategies and estimate their actual computational cost. Moreover, energy preservation of the generalized trapezoidal rule is proved for Hamiltonian systems with piecewise linear right hand side.
We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
This article describes new features and enhanced algorithms made available in version 5.0 of the SCIP Optimization Suite. In its central component, the constraint integer programming solver SCIP, remarkable performance improvements have been achieved for solving mixed-integer linear and nonlinear programs. On MIPs, SCIP 5.0 is about 41 % faster than SCIP 4.0 and over twice as fast on instances that take at least 100 seconds to solve. For MINLP, SCIP 5.0 is about 17 % faster overall and 23 % faster on instances that take at least 100 seconds to solve. This boost is due to algorithmic advances in several parts of the solver such as cutting plane generation and management, a new adaptive coordination of large neighborhood search heuristics, symmetry handling, and strengthened McCormick relaxations for bilinear terms in MINLPs. Besides discussing the theoretical background and the implementational aspects of these developments, the report describes recent additions for the other software packages connected to SCIP, in particular for the LP solver SoPlex, the Steiner tree solver SCIP-Jack, the MISDP solver SCIP-SDP, and the parallelization framework UG.
In this article we analyze a generalized trapezoidal rule for initial value problems with piecewise smooth right hand side \(F:R^n \to R^n\) based on a generalization of algorithmic differentiation. When applied to such a problem, the classical trapezoidal rule suffers from a loss of accuracy if the solution trajectory intersects a nondifferentiability of \(F\). The advantage of the proposed generalized trapezoidal rule is threefold: Firstly, we can achieve a higher convergence order than with the classical method. Moreover, the method is energy preserving for piecewise linear Hamiltonian systems. Finally, in analogy to the classical case we derive a third order interpolation polynomial for the numerical trajectory. In the smooth case the generalized rule reduces to the classical one. Hence, it is a proper extension of the classical theory. An error estimator is given and numerical results are presented.
An automatic adaptive importance sampling algorithm for molecular dynamics in reaction coordinates
(2017)
In this article we propose an adaptive importance sampling scheme for dynamical quantities of high dimensional complex systems which are metastable. The main idea of this article is to combine a method coming from Molecular Dynamics Simulation, Metadynamics, with a theorem from stochastic analysis, Girsanov's theorem. The proposed algorithm has two advantages compared to a standard estimator of dynamic quantities: firstly, it is possible to produce estimators with a lower variance and, secondly, we can speed up the sampling. One of the main problems for building importance sampling schemes for metastable systems is to find the metastable region in order to manipulate the potential accordingly. Our method circumvents this problem by using an assimilated version of the Metadynamics algorithm and thus creates a non-equilibrium dynamics which is used to sample the equilibrium quantities.
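The reweighting mechanism behind such non-equilibrium sampling can be sketched in one dimension (our own toy example with a hand-crafted Gaussian bias bump rather than an adaptively built metadynamics potential): simulate the biased dynamics and correct expectations with the Girsanov likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, dt, n_steps, n_traj = 3.0, 1e-3, 2000, 2000
sigma = np.sqrt(2.0 / beta)                                 # overdamped Langevin noise

grad_V = lambda x: 4 * x * (x**2 - 1)                       # double well V(x) = (x^2 - 1)^2
bias = lambda x: 8.0 * (x + 1) * np.exp(-2 * (x + 1) ** 2)  # force of a Gaussian bump
                                                            # filling the left well
x = -np.ones(n_traj)                 # all trajectories start in the left well
logw = np.zeros(n_traj)              # Girsanov log-weights
hit = np.zeros(n_traj, dtype=bool)   # reached the right well before time T?
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_traj)
    u = bias(x)
    # log-increment of dP/dQ for the drift perturbation u (Euler-Maruyama)
    logw += -(u / sigma) * dW - 0.5 * (u / sigma) ** 2 * dt
    x += (-grad_V(x) + u) * dt + sigma * dW
    hit |= x > 0.9

print("biased (uncorrected) estimate:", hit.mean())
print("Girsanov-reweighted estimate: ", np.mean(hit * np.exp(logw)))
```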
It is shown how piecewise differentiable functions \(F: R^n → R^m\) that are defined by evaluation programs can be approximated locally by a piecewise linear model based on a pair of sample points x̌ and x̂. We show that the discrepancy between function and model at any point x is of the bilinear order O(||x − x̌|| ||x − x̂||). This is a little surprising since x ∈ R^n may vary over the whole Euclidean space, and we utilize only two function samples F̌ = F(x̌) and F̂ = F(x̂), as well as the intermediates computed during their evaluation. As an application of the piecewise linearization procedure we devise a generalized Newton's method based on successive piecewise linearization and prove for it sufficient conditions for convergence and convergence rates equaling those of semismooth Newton. We conclude with the derivation of formulas for the numerically stable implementation of the piecewise linearization methods developed above.
In many applications one is interested in computing the transition probabilities of a Markov chain. This can be achieved by using Monte Carlo methods with local or global sampling points. In this article, we analyze the error, measured by the difference in the $L^2$ norm, between the true transition probabilities and the approximation achieved through a Monte Carlo method. We give a formula for the error for Markov chains with locally computed sampling points. Further, in the case of reversible Markov chains, we deduce a formula for the error when sampling points are computed globally. We show that in both cases the error itself can be approximated with Monte Carlo methods. As a consequence of the result, we derive surprising properties of reversible Markov chains.
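For locally computed sampling points, the basic estimator is straightforward; a toy illustration (our own three-state chain) showing the expected O(n^{-1/2}) decay of the L2 error:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7]])     # true transition matrix of a toy chain

def estimate_row(i, n_samples):
    """Estimate the i-th row of P from n_samples one-step moves
    started in state i ("local" sampling points)."""
    steps = rng.choice(3, size=n_samples, p=P[i])
    return np.bincount(steps, minlength=3) / n_samples

for n in (100, 1000, 10000):
    err = np.linalg.norm(estimate_row(0, n) - P[0])
    print(f"n = {n:6d}   L2 error = {err:.4f}")   # decays like 1/sqrt(n)
```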
This article extends the framework of Bayesian inverse problems in infinite-dimensional parameter spaces, as advocated by Stuart (Acta Numer. 19:451–559, 2010) and others, to the case of a heavy-tailed prior measure in the family of stable distributions, such as an infinite-dimensional Cauchy distribution, for which polynomial moments are infinite or undefined. It is shown that analogues of the Karhunen–Loève expansion for square-integrable random variables can be used to sample such measures. Furthermore, under weaker regularity assumptions than those used to date, the Bayesian posterior measure is shown to depend Lipschitz continuously in the Hellinger metric upon perturbations of the misfit function and observed data.
Traditionally, Lagrangian fields such as finite-time Lyapunov exponents (FTLE) are precomputed on a discrete grid and are ray casted afterwards. This, however, introduces both grid discretization errors and sampling errors during ray marching. In this work, we apply a progressive, view-dependent Monte Carlo-based approach for the visualization of such Lagrangian fields in time-dependent flows. Our approach avoids grid discretization and ray marching errors completely, is consistent, and has a low memory consumption. The system provides noisy previews that converge over time to an accurate high-quality visualization. Compared to traditional approaches, the proposed system avoids explicitly predefined fieldline seeding structures, and uses a Monte Carlo sampling strategy named Woodcock tracking to distribute samples along the view ray. An acceleration of this sampling strategy requires local upper bounds for the FTLE values, which we progressively acquire during the rendering. Our approach is tailored for high-quality visualizations of complex FTLE fields and is guaranteed to faithfully represent detailed ridge surface structures as indicators for Lagrangian coherent structures (LCS). We demonstrate the effectiveness of our approach on a set of analytic test cases and real-world numerical simulations.
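Woodcock tracking itself is compact; a minimal 1D sketch along a single ray (our own toy extinction profile standing in for the progressively acquired FTLE bounds): sample tentative collision distances with a constant majorant extinction and accept each tentative collision with probability sigma(t)/sigma_max.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigma(t):
    """Spatially varying extinction along the ray (toy profile)."""
    return 2.0 * np.exp(-(t - 3.0) ** 2)

sigma_max = 2.0                     # local upper bound (majorant extinction)

def woodcock_sample(t_max=10.0):
    t = 0.0
    while True:
        t += -np.log(rng.random()) / sigma_max   # tentative free path
        if t >= t_max:
            return None                          # ray leaves the volume
        if rng.random() < sigma(t) / sigma_max:  # real collision?
            return t

samples = [woodcock_sample() for _ in range(100000)]
hits = [s for s in samples if s is not None]
print("hit fraction:", len(hits) / len(samples), " mean depth:", np.mean(hits))
```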
One of the main goals of mathematical modelling in systems biology related to medical applications is to obtain patient-specific parameterisations and model predictions.
In clinical practice, however, the number of available measurements for single patients is usually limited due to time and cost restrictions. This hampers the process of making patient-specific predictions about the outcome of a treatment. On the other hand, data are often available for many patients, in particular if extensive clinical studies have been performed. Using these population data, we propose an iterative algorithm for constructing an informative prior distribution, which then serves as the basis for computing patient-specific posteriors and obtaining individual predictions. We demonstrate the performance of our method by applying it to a low-dimensional parameter estimation problem in a toy model as well as to a high-dimensional ODE model of the human menstrual cycle, which represents a typical example from systems biology modelling.
Optical 3D simulations in many-query and real-time contexts require new solution strategies. We study an adaptive, error-controlled reduced basis method for solving parametrized time-harmonic optical scattering problems. Application fields are, among others, design and optimization problems for nano-optical devices as well as inverse problems for parameter reconstruction occurring, e.g., in optical metrology. The reduced basis method presented here relies on a finite element modeling of the scattering problem with parametrization of materials, geometries, and sources.
Reconstruction of photonic crystal geometries using a reduced basis method for nonlinear outputs
(2016)
Maxwell solvers based on the hp-adaptive finite element method allow for accurate geometrical modeling and high numerical accuracy. These features are indispensable for the optimization of optical properties or the reconstruction of parameters through inverse processes. High computational complexity prohibits the evaluation of the solution for many parameters. We present a reduced basis method (RBM) for the time-harmonic electromagnetic scattering problem that allows computing solutions for a given parameter configuration orders of magnitude faster. The RBM allows evaluating linear and nonlinear outputs of interest, like the Fourier transform or the enhancement of the electromagnetic field, in milliseconds. We apply the RBM to compute light scattering off two-dimensional photonic crystal structures made of silicon and reconstruct geometrical parameters.
Model order reduction for the time-harmonic Maxwell equation applied to complex nanostructures
(2016)
Fields such as optical metrology and computational lithography require fast and efficient methods for solving the time-harmonic Maxwell's equation. Highly accurate geometrical modeling and numerical accuracy at low computational costs are a prerequisite for any simulation study of complex nano-structured photonic devices. We present a reduced basis method (RBM) for the time-harmonic electromagnetic scattering problem based on the hp-adaptive finite element solver JCMsuite, capable of handling geometric and non-geometric parameter dependencies and allowing for online evaluations in milliseconds. We apply the RBM to compute light scattering at optical wavelengths off periodic arrays of fin field-effect transistors (FinFETs), where geometrical properties such as the width and height of the fin and gate can vary over a large range.
Rigorous optical simulations of 3-dimensional nano-photonic structures are an important tool in the analysis and optimization of scattering properties of nano-photonic devices and in parameter reconstruction. To construct geometrically accurate models of complex structured nano-photonic devices, the finite element method (FEM) is ideally suited due to its flexibility in the geometrical modeling and its superior convergence properties. Reduced order models such as the reduced basis method (RBM) allow the construction of self-adaptive, error-controlled, very low dimensional approximations of input-output relationships which can be evaluated orders of magnitude faster than the full model. This is advantageous in applications requiring the solution of Maxwell's equations for multiple parameters, or for a single parameter but in real time. We present a reduced basis method for the 3D Maxwell's equations based on the finite element method which allows variations of geometric as well as material and frequency parameters. We demonstrate the accuracy and efficiency of the method for a light scattering problem exhibiting a resonance in the electric field.
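The offline/online split underlying a reduced basis method can be sketched on a generic affinely parametrized linear system (our own toy problem; the actual application involves FEM discretizations of Maxwell's equations):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
A0 = np.diag(2.0 + rng.random(n)) + 0.001 * rng.random((n, n))
A1 = np.diag(rng.random(n))                            # affine parameter term
f = rng.random(n)
solve = lambda mu: np.linalg.solve(A0 + mu * A1, f)    # full ("truth") solve

# Offline phase: snapshots at a few parameters, orthonormalized to a basis V.
snapshots = np.column_stack([solve(mu) for mu in (0.1, 0.5, 1.0, 2.0)])
V, _ = np.linalg.qr(snapshots)

# Online phase: Galerkin projection yields a tiny dense system per parameter.
def rb_solve(mu):
    Ar = V.T @ (A0 + mu * A1) @ V   # affine terms V'A0V, V'A1V could be precomputed
    return V @ np.linalg.solve(Ar, V.T @ f)

mu = 1.3
err = np.linalg.norm(rb_solve(mu) - solve(mu)) / np.linalg.norm(solve(mu))
print("relative reduced basis error:", err)
```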
In many experimentally realized applications, e.g. photonic crystals, solar cells and light-emitting diodes, nano-photonic systems are coupled to a thick substrate layer, which in certain cases has to be included as a part of the optical system. The finite element method (FEM) yields rigorous, high accuracy solutions of the full 3D vectorial Maxwell's equations [1] and allows for great flexibility and accuracy in the geometrical modelling. Time-harmonic FEM solvers have been combined with Fourier methods in domain decomposition algorithms to compute coherent solutions of these coupled systems [2,3]. The basic idea of a domain decomposition approach lies in a decomposition of the domain into smaller subdomains, separate calculations of the solutions, and coupling of these solutions on adjacent subdomains.
In experiments, light sources are often not perfectly monochromatic, and hence a comparison to simulation results might only be justified if the simulation results, which include interference patterns in the substrate, are spectrally averaged.
In this contribution we present a scattering matrix domain decomposition algorithm for Maxwell's equations based on FEM. We study its convergence and advantages in the context of optical simulations of silicon thin film multi-junction solar cells. This allows for substrate light-trapping to be included in optical simulations and leads to a more realistic estimation of light path enhancement factors in thin-film devices near the band edge.
Adaptive sampling strategies for efficient parameter scans in nano-photonic device simulations
(2014)
Rigorous optical simulations are an important tool in optimizing scattering properties of nano-photonic devices and are used, for example, in solar cell optimization. The finite element method (FEM) yields rigorous, time-harmonic, high accuracy solutions of the full 3D vectorial Maxwell's equations [1] and furthermore allows for great flexibility and accuracy in the geometrical modeling of these often complex shaped 3D nano-structures. A major drawback of frequency domain methods is the limitation to single frequency evaluations. For example, the accurate computation of the short circuit current density of an amorphous silicon / micro-crystalline multi-junction thin film solar cell may require the solution of Maxwell's equations for over a hundred different wavelengths if an equidistant sampling strategy is employed. Also in optical metrology, wavelength scans are frequently used to reconstruct unknown geometrical and material properties of optical systems numerically from measured scatterometric data.
In our contribution we present several adaptive numerical integration and sampling routines and study their efficiency in the context of the determination of generation rate profiles of solar cells. We show that these strategies lead to a reduction in the computational effort without loss of accuracy. We discuss the employment of tangential information in a Hermite interpolation scheme to achieve similar accuracy on coarser grids. We explore the usability of these strategies for scatterometry and solar cell simulations.
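A greedy variant of such adaptive sampling can be sketched as follows (our own toy spectrum in place of a Maxwell solve; the paper's routines additionally exploit tangential/Hermite information):

```python
import numpy as np

# Stand-in for an expensive per-wavelength Maxwell solve.
spectrum = lambda w: np.sin(20 * w) / (1 + 5 * (w - 0.6) ** 2)

pts = [0.4, 0.6, 0.8]                          # initial wavelength samples
vals = [spectrum(w) for w in pts]
for _ in range(20):
    # Estimate the linear-interpolation error on each interval via its midpoint
    # (midpoint evaluations could be cached and reused as samples).
    errs = [abs(spectrum(0.5 * (a + b)) - 0.5 * (fa + fb))
            for a, b, fa, fb in zip(pts[:-1], pts[1:], vals[:-1], vals[1:])]
    k = int(np.argmax(errs))                   # refine where the estimate is largest
    m = 0.5 * (pts[k] + pts[k + 1])
    pts.insert(k + 1, m); vals.insert(k + 1, spectrum(m))
print(f"{len(pts)} samples, largest midpoint error {max(errs):.2e}")
```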
In the numerical solution of optimal control problems with elliptic partial differential equations as constraints, discretization and iteration errors inevitably occur. For reasons of computational cost, one is interested in not having to make these errors very small. As a consequence, the linearized constraints in a composite-step method are not satisfied exactly. This thesis investigates the influence of this inaccuracy on the convergence behavior of Newton-Lagrange methods. Several pertinent local convergence results are discussed. Subsequently, a concrete composite-step method is formulated in which the accuracy of the inner iterative solvers can be controlled adaptively. Finally, the close agreement between the analytical predictions and the actual performance of the presented methods is demonstrated on two model problems.
This paper considers the optimal control of tuberculosis through education, diagnosis campaigns and chemoprophylaxis of the latently infected. A mathematical model which includes important components such as undiagnosed infectious, diagnosed infectious, latently infected and lost-sight infectious individuals is formulated. The model combines a frequency-dependent and a density-dependent force of infection for TB transmission. Through optimal control theory and numerical simulations, a cost-effective balance of two different intervention methods is obtained. Seeking to minimize the amount of money the government spends while tuberculosis remains endemic in the Cameroonian population, Pontryagin's maximum principle is used to characterize the optimal control. The optimality system is derived and solved numerically using the forward-backward sweep method (FBSM). Results provide a framework for designing cost-effective strategies for diseases with multiple intervention methods. It turns out that by combining chemoprophylaxis and education, the burden of TB can be reduced by 80% in 10 years.
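The forward-backward sweep method iterates forward integration of the state, backward integration of the adjoint, and a control update from the optimality condition. A minimal sketch on a toy linear-quadratic problem (our own example, not the TB model):

```python
import numpy as np

# Toy problem: minimize J = integral of (x^2 + u^2) dt subject to
# x' = x + u, x(0) = 1, on t in [0, 1].
N, T = 1000, 1.0
dt = T / N
u = np.zeros(N + 1)                     # initial control guess

for sweep in range(50):
    # forward sweep: state equation x' = x + u (explicit Euler)
    x = np.empty(N + 1); x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + dt * (x[k] + u[k])
    # backward sweep: adjoint lam' = -(2x + lam), lam(T) = 0
    lam = np.empty(N + 1); lam[-1] = 0.0
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] + dt * (2 * x[k] + lam[k])
    # optimality condition 2u + lam = 0, with relaxation for stability
    u_new = -0.5 * lam
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = 0.5 * u + 0.5 * u_new

print("sweeps:", sweep + 1, "  objective J ~", dt * np.sum(x**2 + u**2))
```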
A deterministic model of tuberculosis in sub-Saharan Africa in general, and Cameroon in particular, including lack of access to treatment and weak diagnosis capacity, is designed and analyzed with respect to its transmission dynamics.
The model includes both frequency- and density-dependent transmissions. It is shown that the model is mathematically well-posed and epidemiologically reasonable. Solutions are non-negative and bounded whenever the initial values are non-negative.
A sensitivity analysis of model parameters is performed and the most sensitive parameters of the model are identified using a state-of-the-art Gauss-Newton method. In particular, parameters representing the proportion of individuals having access to medical facilities have a large impact on the dynamics of the disease. It is shown that an increase of these parameter values over time can significantly reduce the disease burden in the population within the next 15 years.
This thesis firstly presents a nonlinear extended deterministic model for the transmission dynamics of tuberculosis, based on realistic assumptions and data collected from the WHO. This model enables a comprehensive qualitative analysis of various aspects of the outbreak and control of tuberculosis in Sub-Saharan African countries, and successfully reproduces the epidemiology of tuberculosis in Cameroon for the past (1994–2010). Some particular properties of the model and its solution have been presented using the comparison theorem applied to the theory of differential equations. The existence and the stability of a disease-free equilibrium have been discussed using the Perron-Frobenius theorem and Metzler stable matrices.
Furthermore, we computed the basic reproduction number, i.e. the number of cases that one case generates on average over the course of its infectious period. Rigorous qualitative analysis of the model reveals that, in contrast to the model without reinfections, the full model with reinfection exhibits the phenomenon of backward bifurcation, where a stable disease-free equilibrium coexists with a stable endemic equilibrium when a certain threshold quantity, known as the basic reproduction ratio (R0), is less than unity. The global stability of the disease-free equilibrium has been discussed using the concepts of Lyapunov stability and bifurcation theory.
With the help of a sensitivity analysis using data from Cameroon, we identified the relevant parameters which play a key role for the transmission and control of the disease. This was made possible by applying sophisticated numerical methods (POEM) developed at ZIB. Using advanced approaches for optimal control that consider the costs for chemoprophylaxis, treatment and educational campaigns should provide a framework for designing realistic, cost-effective strategies with different intervention methods. The forward-backward sweep method has been used to solve the numerical optimal control problem. The numerical result of the optimal control problem reveals that combined effort in education and chemoprophylaxis may lead to a reduction of 80% in the number of infected people in 10 years.
The mathematical and numerical approaches developed in this thesis could be similarly applied in many other Sub-Saharan countries where TB is a public health problem.
Spectral deferred correction methods for solving stiff ODEs are known to converge rapidly towards the collocation limit solution on equidistant grids, but show a much less favourable contraction on non-equidistant grids such as Radau-IIa points. We interpret SDC methods as fixed point iterations for the collocation system and propose new DIRK-type sweeps for stiff problems based on purely linear algebraic considerations. Good convergence is recovered also on non-equidistant grids. The properties of different variants are explored on a couple of numerical examples.
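Viewing SDC as a fixed point iteration for the collocation system can be made concrete on the Dahlquist test problem; below is a minimal implicit-Euler-type sweep on equidistant nodes (our own toy, not the proposed DIRK sweeps), which converges to the collocation solution:

```python
import numpy as np

lam, y0, dT, M = -10.0, 1.0, 0.5, 5
tau = np.linspace(0.0, dT, M)                  # equidistant collocation nodes

def integration_matrix(nodes):
    """S[m, j] = integral of the j-th Lagrange basis polynomial
    over [nodes[m], nodes[m+1]]."""
    n = len(nodes)
    S = np.zeros((n - 1, n))
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        c = np.polynomial.polynomial.polyfit(nodes, e, n - 1)
        P = np.polynomial.polynomial.Polynomial(c).integ()
        for m in range(n - 1):
            S[m, j] = P(nodes[m + 1]) - P(nodes[m])
    return S

S = integration_matrix(tau)
y = np.full(M, y0)                             # sweep iterate y^k at the nodes
y_exact = y0 * np.exp(lam * dT)
for k in range(8):
    y_new = np.empty(M); y_new[0] = y0
    for m in range(M - 1):
        dt = tau[m + 1] - tau[m]
        # implicit-Euler-type SDC sweep, solved exactly for the linear problem
        rhs = y_new[m] - dt * lam * y[m + 1] + S[m] @ (lam * y)
        y_new[m + 1] = rhs / (1.0 - dt * lam)
    y = y_new
    print(f"sweep {k + 1}: |y(dT) - exact| = {abs(y[-1] - y_exact):.2e}")
# The error stalls at the accuracy of the underlying collocation solution.
```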
We present a level of detail method for trees based on ellipsoids and lines. We leverage the Expectation Maximization algorithm with a Gaussian Mixture Model to create a hierarchy of high-quality leaf clusterings, while the branches are simplified using agglomerative bottom-up clustering to preserve the connectivity. The simplification runs in a preprocessing step and requires no human interaction. For a fly by over and through a scene of 10k trees, our method renders on average at 40 ms/frame, up to 6 times faster than billboard clouds with comparable artifacts.
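The leaf-clustering step can be sketched with an off-the-shelf EM implementation (our own minimal version; the cluster count and the 2-sigma ellipsoid scaling are illustrative choices):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
leaves = rng.normal(size=(5000, 3)) * [1.0, 2.0, 0.5]   # stand-in leaf point cloud

# Fit a Gaussian mixture via EM to the 3D leaf positions.
gmm = GaussianMixture(n_components=16, covariance_type="full",
                      random_state=0).fit(leaves)

# Turn each component's covariance into an ellipsoid: principal axes from
# the eigendecomposition, radii from the eigenvalues.
ellipsoids = []
for mean, cov in zip(gmm.means_, gmm.covariances_):
    evals, evecs = np.linalg.eigh(cov)
    radii = 2.0 * np.sqrt(evals)            # ~2-sigma ellipsoid per cluster
    ellipsoids.append((mean, radii, evecs))
print(len(ellipsoids), "ellipsoids replace", len(leaves), "leaves")
```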
This paper surveys the required mathematics for a typical challenging problem from computational medicine, the cancer therapy planning in deep regional hyperthermia. In the course of many years of close cooperation with clinics, the medical problem gave rise to quite a number of subtle mathematical problems, part of which had been unsolved when the joint project started. Efficiency of numerical algorithms, i.e. computational speed and monitored reliability, plays a decisive role for the medical treatment. Off-the-shelf software had turned out to be not sufficient to meet the requirements of medicine. Rather, new mathematical theory as well as new numerical algorithms had to be developed. In order to make our algorithms useful in the clinical environment, new visualization software, a virtual lab, including 3D geometry processing of individual virtual patients, had to be designed and implemented. Moreover, before the problems could be attacked by numerical algorithms, careful mathematical modelling had to be done. Finally, parameter identification and constrained optimization for the PDEs had to be newly analyzed and realized over the individual patient's geometry. Our new techniques had an impact on the specificity of the individual patients' treatment and on the construction of an improved hyperthermia applicator.
Particle methods have become indispensable in conformation dynamics to compute transition rates in protein folding, binding processes and molecular design, to mention a few. Conformation dynamics requires a decomposition of a molecule's position space into metastable conformations. In this paper, we show how this decomposition can be obtained via the design of either "soft" or "hard" molecular conformations. We show that the soft approach results in a larger metastability of the decomposition and is thus more advantageous. This is illustrated by a simulation of Alanine Dipeptide.
Convergence Analysis of Smoothing Methods for Optimal Control of Stationary Variational Inequalities
(2011)
In this article an optimal control problem subject to a stationary variational inequality is investigated. The optimal control problem is complemented with pointwise control constraints. The convergence of a smoothing scheme is analyzed, in which the variational inequality is replaced by a semilinear elliptic equation. It is shown that solutions of the regularized optimal control problem converge to solutions of the original one. Passing to the limit in the optimality system of the regularized problem allows us to prove C-stationarity of local solutions of the original problem. Moreover, convergence rates with respect to the regularization parameter for the error in the control are obtained. These rates coincide with the rates observed in numerical experiments, which are included in the paper.
Our focus is on Maxwell's equations in the low frequency range; two specific applications we aim at are time-stepping schemes for eddy current computations and the stationary double-curl equation for time-harmonic fields. We assume that the computational domain is discretized by triangles or tetrahedrons; for the finite element approximation we choose N\'{e}d\'{e}lec's $H(curl)$-conforming edge elements of the lowest order. For the solution of the arising linear equation systems we devise an algebraic multigrid preconditioner based on a spatial component splitting of the field. Mesh coarsening takes place in an auxiliary subspace, which is constructed with the aid of a nodal vector basis. Within this subspace coarse grids are created by exploiting the matrix graphs. Additionally, we have to cope with the kernel of the $curl$-operator, which comprises a considerable part of the spectral modes on the grid. Fortunately, the kernel modes are accessible via a discrete Helmholtz decomposition of the fields; they are smoothed by additional algebraic multigrid cycles. Numerical experiments are included in order to assess the efficacy of the proposed algorithms.
We present an algebraic multigrid preconditioner which uses only the graphs of system matrices. Some elementary coarsening rules are stated, from which an advancing front algorithm for the selection of coarse grid nodes is derived. This technique can be applied to linear Lagrange-type finite element discretizations; for higher-order elements an extension of the multigrid algorithm is provided. Both two- and three-dimensional second order elliptic problems can be handled. Numerical experiments show that the resulting convergence acceleration is comparable to classical geometric multigrid.
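One way to realize such graph-based coarsening is a greedy advancing-front selection of coarse nodes, where the neighbours of each selected node become fine nodes (a simplified sketch of the idea under our own conventions, not the paper's exact coarsening rules):

```python
import numpy as np

def coarsen(A, tol=1e-12):
    """Greedy coarse-node selection using only the matrix graph of A:
    an advancing front of candidates is processed; each chosen coarse
    node turns its undecided neighbours into fine nodes."""
    n = A.shape[0]
    adjacency = [set(np.nonzero(np.abs(A[i]) > tol)[0]) - {i} for i in range(n)]
    undecided, coarse = set(range(n)), []
    front = [0]                                   # advancing front of candidates
    while undecided:
        i = front.pop() if front else min(undecided)   # restart on disconnected parts
        if i not in undecided:
            continue
        coarse.append(i)                          # i becomes a coarse node
        undecided.discard(i)
        for j in adjacency[i] & undecided:        # neighbours become fine nodes
            undecided.discard(j)
            front.extend(adjacency[j])            # push their neighbours as candidates
    return coarse

# 1D Laplacian: expect roughly every other node to be selected.
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print("coarse nodes:", coarsen(A))
```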