Based on a thermodynamically consistent model for precipitation in gallium arsenide crystals including surface tension and bulk stresses by Dreyer and Duderstadt, we propose different mathematical models to describe the size evolution of liquid droplets in a crystalline solid. The first class of models treats the diffusion-controlled regime of interface motion, while the second class is concerned with the interface-controlled regime of interface motion. Our models ensure conservation of mass and substance. We consider homogenised models, where different length scales of the experimental situation have been exploited in order to simplify the equations. These homogenised models generalise the well-known Lifshitz-Slyozov-Wagner model for Ostwald ripening. Mean field models capture the main properties of our system and are well adapted for numerics and further analysis. Numerical evidence suggests which of the two regimes is appropriate in a given experimental situation.
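For orientation, the classical Lifshitz-Slyozov-Wagner mean field model takes the following nondimensionalized textbook form (a sketch for context, not one of the homogenised models derived in this work): in the diffusion-controlled regime a droplet of radius $R$ evolves according to
\[
\frac{dR}{dt} = \frac{1}{R}\left(\frac{1}{R_c(t)} - \frac{1}{R}\right),
\qquad
\partial_t f + \partial_R\!\left(f\,\frac{dR}{dt}\right) = 0,
\]
where $f(R,t)$ is the droplet size distribution and the critical radius $R_c(t)$ is the mean field determined by conservation of mass.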
We introduce an electronic model for solar cells including energy-resolved defect densities. The resulting drift-diffusion model corresponds to a generalized
van Roosbroeck system with additional source terms coupled with ODEs containing space and
energy as parameters for all defect densities. The system has to be considered in
heterostructures and with mixed boundary conditions from device simulation.
We give a weak formulation of the problem. If the boundary data and the sources are compatible with thermodynamic equilibrium, the free energy along solutions decays monotonically. In other cases it may increase, but we estimate its growth.
We establish boundedness and uniqueness results and prove the existence of a
weak solution. This is done by considering a regularized problem, showing its solvability, and proving bounds on its solutions that are independent of the regularization level.
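For context, the classical van Roosbroeck system that is being generalized couples the electrostatic potential $\varphi$ to the electron and hole densities $n$ and $p$ (a standard sketch; the model of this paper adds the energy-resolved defect densities as further source terms and coupled ODEs):
\[
-\nabla\cdot(\varepsilon\,\nabla\varphi) = q\,(p - n + C), \qquad
\partial_t n - \tfrac{1}{q}\,\nabla\cdot j_n = -R(n,p), \qquad
\partial_t p + \tfrac{1}{q}\,\nabla\cdot j_p = -R(n,p),
\]
with doping profile $C$, recombination rate $R$, and drift-diffusion current densities $j_n$, $j_p$.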
We prove global convergence of an inexact polyhedral Gauß-Seidel method for the minimization of strictly convex functionals that are continuously differentiable on each polyhedron of a polyhedral decomposition of their domains of definition. While known to be very slow on their own, such methods are a cornerstone of fast, globally convergent multigrid methods. Our result generalizes the proof of Kornhuber and Krause [2006] for differentiable functionals on the Gibbs simplex. Example applications are given that require the generality of our approach.
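To fix ideas, a minimal nonlinear Gauss-Seidel (coordinate descent) iteration for a smooth, strictly convex functional can be sketched in a few lines; the paper's setting, where the functional is nonsmooth and only piecewise differentiable on a polyhedral decomposition, is substantially more general.

import numpy as np
from scipy.optimize import minimize_scalar

# Nonlinear Gauss-Seidel sketch for a smooth, strictly convex functional J:
# sweep over the coordinates, minimizing exactly in each direction.
def gauss_seidel_minimize(J, x0, sweeps=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(x.size):
            def phi(t, i=i):          # one-dimensional section of J
                y = x.copy()
                y[i] = t
                return J(y)
            x[i] = minimize_scalar(phi).x
    return x

# Example: strictly convex quadratic J(x) = 0.5 x^T A x - b^T x
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
J = lambda x: 0.5 * x @ A @ x - b @ x
print(gauss_seidel_minimize(J, np.zeros(2)))   # approaches A^{-1} b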
Supporting Global Numerical Optimization of Rational Functions by Generic Symbolic Convexity Tests
(2010)
Convexity is an important property in nonlinear optimization since it allows efficient local methods to be applied for finding global solutions. We propose to apply symbolic methods to prove or disprove convexity of rational functions over a polyhedral domain. Our algorithms reduce convexity questions to real quantifier elimination problems. Our methods are implemented and publicly available in the open source computer algebra system REDUCE. Our long-term goal is to integrate REDUCE as a "workhorse" for symbolic computations into a numerical solver.
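The reduction can be sketched via the standard second-order characterization (valid for twice-differentiable $f$ on a convex $P$; the paper's algorithms must additionally handle points where the rational function is undefined):
\[
f \text{ convex on } P
\;\Longleftrightarrow\;
\forall x \in P \;\; \forall z \in \mathbb{R}^n : \; z^{\mathsf T}\, \nabla^2 f(x)\, z \;\ge\; 0,
\]
a sentence in the first-order theory of the reals and hence amenable to real quantifier elimination.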
The aim of this paper is to devise an adaptive timestep control in the contact-stabilized Newmark method (CONTACX) for dynamical contact problems between two viscoelastic bodies in the framework of Signorini's condition. In order to construct a comparative scheme of higher order accuracy, we extend extrapolation techniques. This approach demands a subtle theoretical investigation of an asymptotic error expansion of the contact-stabilized Newmark scheme. On the basis of theoretical insight and numerical observations, we suggest an error estimator and a timestep selection which also cover the presence of contact. Finally, we give a numerical example.
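Generic extrapolation-based step control, of which the contact-aware estimator in the paper is a refinement, compares one step of size $\tau$ with two steps of size $\tau/2$ (a sketch; $p$ denotes the order of the scheme and $\mathrm{TOL}$ the prescribed tolerance):
\[
[\varepsilon_\tau] \approx \| x_{\tau/2} - x_\tau \|,
\qquad
\tau_{\mathrm{new}} = \tau \left( \frac{\mathrm{TOL}}{[\varepsilon_\tau]} \right)^{1/(p+1)}.
\]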
We propose a robust and efficient numerical discretization scheme for the infinitesimal generator of a diffusion process based on a finite volume approximation. The resulting discrete-space operator can be interpreted as a jump process on the mesh whose invariant measure is precisely the cell approximation of the Boltzmann distribution of the original process. Moreover, the resulting jump process preserves the detailed balance property of the original stochastic process.
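A minimal one-dimensional sketch of this idea follows (uniform mesh and a made-up potential for illustration only; the paper develops the general finite volume construction).

import numpy as np

# Jump rates between neighboring cells chosen so that the Boltzmann weights
# pi_i ~ exp(-beta*V(x_i)) satisfy detailed balance pi_i*L[i,j] == pi_j*L[j,i].
beta, h = 1.0, 0.1
x = np.arange(0.0, 2.0, h)              # cell centers
V = (x - 1.0) ** 2                      # hypothetical potential, not from the paper
pi = np.exp(-beta * V)
pi /= pi.sum()                          # cell approximation of the Boltzmann measure

n = x.size
L = np.zeros((n, n))
for i in range(n - 1):
    # symmetric construction: rate proportional to sqrt(pi_j / pi_i)
    L[i, i + 1] = np.sqrt(pi[i + 1] / pi[i]) / h**2
    L[i + 1, i] = np.sqrt(pi[i] / pi[i + 1]) / h**2
np.fill_diagonal(L, -L.sum(axis=1))     # generator property: rows sum to zero

# detailed balance check: pi_i L_ij == pi_j L_ji for all i, j
assert np.allclose(pi[:, None] * L, (pi[:, None] * L).T)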
We revisit the problem of the linear response of a constrained mechanical system. In doing so we show that the standard expressions of Green and Kubo carry over to the constrained case without any alteration. The argument is based on the appropriate definition of constrained expectations by means of which Liouville’s theorem and the Green-Kubo relations naturally follow.
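For reference, a Green-Kubo relation in its standard form expresses a transport coefficient $\kappa$ as the time integral of an equilibrium autocorrelation function of the associated flux $J$ (up to model-dependent prefactors):
\[
\kappa = \int_0^{\infty} \mathbb{E}\big[\, J(0)\, J(t) \,\big]\, dt,
\]
where, in the setting above, the expectation is taken with respect to the constrained equilibrium measure.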
We propose a nonequilibrium sampling method for computing free energy profiles along a given reaction coordinate. The method consists of two parts: a controlled Langevin sampler that generates nonequilibrium bridge paths conditioned by the reaction coordinate, and Jarzynski’s formula for reweighting the paths. Our derivation of the equations of motion of the sampler is based on stochastic perturbation of a controlled dissipative Hamiltonian system, for which we prove Jarzynski’s identity as a special case of the Feynman-Kac formula. We illustrate our method by means of a suitable numerical example and briefly discuss issues of optimally choosing the control protocol for the reaction coordinate.
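The identity in question is, in its standard form,
\[
e^{-\beta\, \Delta F} = \mathbb{E}\big[\, e^{-\beta W} \,\big],
\qquad \beta = 1/(k_B T),
\]
where $W$ is the nonequilibrium work accumulated along a driven path and the expectation runs over the path ensemble; free energy differences $\Delta F$ along the reaction coordinate are then recovered by exponential reweighting.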
We study balanced model reduction of partially-observed linear stochastic differential equations of Langevin type. Balancing the equations of motion gives rise to a singularly perturbed system of equations with slow and fast degrees of freedom, and we prove that in the limit of the fast variables becoming infinitely fast, the solutions converge to the solution of a reduced-order Langevin equation. We illustrate the method with several numerical examples and discuss the relation to model reduction of deterministic control systems that have an underlying Hamiltonian structure.
We study balanced model reduction for stable bilinear systems in the limit of partly vanishing Hankel singular values. We show that the dynamics admit a splitting into fast and slow subspaces and prove an averaging principle for the slow dynamics. We illustrate our method with an example from stochastic control (density evolution of a dragged Brownian particle) and discuss issues of structure preservation and positivity.
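Both of the last two abstracts hinge on Hankel singular values; for background, here is a minimal sketch of how they are computed for a stable deterministic LTI system (standard balanced truncation machinery with made-up matrices, not the stochastic or bilinear constructions of the papers).

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvals

# Hankel singular values of a stable LTI system dx = A x + B u, y = C x:
# square roots of the eigenvalues of the product of the two Gramians.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])    # hypothetical stable system
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
hsv = np.sort(np.sqrt(eigvals(P @ Q).real))[::-1]
print(hsv)   # small values mark states a reduced-order model may drop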
This paper presents three different adaptive algorithms for eigenvalue problems
associated with non-selfadjoint partial differential
operators. The basis for the developed algorithms is a homotopy method.
The homotopy method starts from a well-understood selfadjoint problem,
for which well-established adaptive methods are available.
In addition to adaptive grid refinement, both the progress of the homotopy and the accuracy of the iterative solver are adapted to balance the contributions of the different error sources.
The first algorithm balances the homotopy, discretization and approximation errors with respect
to a fixed step-size $\tau$ in the homotopy.
The second algorithm combines the adaptive step-size control for the homotopy
with an adaptation in space that ensures an error below a fixed tolerance $\varepsilon$.
The third algorithm allows complete adaptivity in space, in the homotopy step size, and in the iterative algebraic eigenvalue solver.
All three algorithms are compared in numerical examples.
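A generic form of such a homotopy (a sketch; the concrete choice of the selfadjoint starting operator depends on the application) is
\[
H(t) = (1-t)\, A_0 + t\, A_1, \qquad t \in [0,1],
\qquad H(t)\, u(t) = \lambda(t)\, u(t),
\]
where $A_0$ is the well-understood selfadjoint operator, $A_1$ the non-selfadjoint target, and the eigenpairs $(\lambda(t), u(t))$ are continued in $t$.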
Given a system of closed convex domains (inclusions) in n-dimensional Euclidean space, new computational meshes are introduced which partition the convex hull of the inclusion set into simple geometric objects. These partitions generalize the concept of Delaunay triangulations by interpreting the inclusions as generalized vertices while the remaining elements of the partition serve as connections between generalized vertices and therefore assume the classical role of edges, faces, etc. The proposed partitions are derived in two different ways: by exploiting duality with respect to certain generalized Voronoi partitions and by generalizing the well-known Delaunay (empty circumcircle) criterion.
Generalized Delaunay partitions are of practical importance for the modeling of particle- and fiber-reinforced composite materials since they enable an efficient conforming resolution of the highly complicated component geometries. The number of elements in the partitions is proportional to the number of inclusions, which is minimal.
Functional Magnetic Resonance Imaging inherently involves noisy measurements and a severe multiple testing problem. Smoothing is typically applied to reduce the effective number of multiple comparisons and to locally integrate the signal and hence increase the signal-to-noise ratio. Here, we provide a new structural adaptive segmentation algorithm (AS) that naturally combines signal detection with noise reduction in a single procedure.
Moreover, the new method
is closely related to a recently proposed structural adaptive smoothing
algorithm and preserves shape and spatial extent of activation areas without
blurring the borders.
MINLP Solver Software
(2010)
Motivated by an obstacle problem for a membrane
subject to cohesion forces, constrained minimization problems involving
a non-convex and non-differentiable objective functional
representing the total potential energy are considered. The associated
first order optimality system leads to a hemi-variational inequality,
which can also be interpreted as a special complementarity
problem in function space. Besides an analytical investigation
of first-order optimality, a primal-dual active set solver is introduced.
It is associated with a limiting case of a semismooth Newton method for a regularized version of the underlying problem class.
For the numerical algorithms studied in this paper, global as well as
local convergence properties are derived and verified numerically.
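For the simpler convex obstacle problem without cohesion forces, a primal-dual active set iteration takes the following discrete form (a sketch with illustrative data; the paper treats the non-convex, non-differentiable cohesion energy).

import numpy as np

# Primal-dual active set method for the discrete lower-obstacle problem
#   min 0.5*x@A@x - f@x   s.t.  x >= psi
def pdas(A, f, psi, c=1.0, max_iter=50):
    n = f.size
    x, lam = psi.copy(), np.zeros(n)
    for _ in range(max_iter):
        active = lam + c * (psi - x) > 0            # predicted contact set
        inactive = ~active
        x_new = psi.copy()                          # x = psi on the active set
        K = A[np.ix_(inactive, inactive)]
        r = f[inactive] - A[np.ix_(inactive, active)] @ psi[active]
        x_new[inactive] = np.linalg.solve(K, r)
        lam_new = np.zeros(n)
        lam_new[active] = (A @ x_new - f)[active]   # multiplier on the active set
        if np.array_equal(active, lam_new + c * (psi - x_new) > 0):
            return x_new, lam_new                   # active set has converged
        x, lam = x_new, lam_new
    return x, lam

# 1D Laplacian toy example with a flat obstacle (illustrative data only)
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = -0.5 * np.ones(n)
psi = np.zeros(n)
x, lam = pdas(A, f, psi)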
A duality based semismooth Newton framework for solving variational inequalities of the second kind
(2010)
In an appropriate function space setting, semismooth Newton methods are proposed
for iteratively computing the solution of a rather general class of variational inequalities (VIs) of the
second kind. The Newton scheme is based on the Fenchel dual of the original VI problem which
is regularized if necessary. In the latter case, consistency of the regularization with respect to the
original problem is studied. The application of the general framework to specific model problems
including Bingham flows, simplified friction, or total variation regularization in mathematical imaging
is described in detail. Finally, numerical experiments are presented in order to verify the theoretical
results.
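For reference, a variational inequality of the second kind has the generic form: find $u \in V$ such that
\[
a(u,\, v - u) + j(v) - j(u) \;\ge\; \langle f,\, v - u \rangle
\qquad \text{for all } v \in V,
\]
where $a$ is a bilinear form and $j$ a convex, nonsmooth functional, e.g. the Bingham, friction, or total variation term in the model problems above.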
In this paper a general form of the infinite-horizon linear quadratic control problem is considered. We will discuss quadratic cost functionals which involve not only the state and input variables but also derivatives of the state and input variables of arbitrary order, under constraints given by linear systems of higher order. We will examine two results that relate the linear quadratic control problem to an optimality system, which is given through a para-Hermitian matrix polynomial. The results can be applied to general rectangular descriptor systems (see Subsection 6.1) to obtain results which so far were only known for square descriptor systems. We will also see that the notion of dissipativity (when introduced in the proper way) is equivalent to the solvability of the linear quadratic control problem.
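Here a matrix polynomial $P(\lambda) = \sum_k \lambda^k P_k$ is called para-Hermitian if, in one common continuous-time convention,
\[
P(\lambda)^{\sim} := P(-\bar{\lambda})^{*} = P(\lambda)
\qquad \text{for all } \lambda \in \mathbb{C},
\]
i.e. the coefficients of even powers are Hermitian and those of odd powers skew-Hermitian.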
The behavior approach and the problem of dissipativity have both been introduced and studied extensively by Willems et al. However, a computationally feasible method to check dissipativity is missing. Current methods mostly rely on symbolic representations of rational functions. We discuss a new characterization for linear systems in behavior form that allows dissipativity to be checked via the solution of a para-Hermitian polynomial eigenvalue problem. Thus, we can employ standard methods of cubic complexity.
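A quadratic matrix polynomial illustrates how such a polynomial eigenvalue problem reduces, via companion linearization, to a standard generalized eigenvalue problem that dense solvers handle in cubic time (coefficient matrices below are made up for illustration, not taken from the paper).

import numpy as np
from scipy.linalg import eig

# Solve the quadratic polynomial eigenvalue problem
#   (P0 + lam*P1 + lam^2*P2) v = 0
# by companion linearization to A z = lam B z with z = [v; lam*v].
def polyeig2(P0, P1, P2):
    n = P0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-P0, -P1]])
    B = np.block([[I, Z], [Z, P2]])
    lam, V = eig(A, B)
    return lam, V[:n]            # eigenvalues and eigenvector top blocks

# Illustrative para-Hermitian coefficients: P0, P2 Hermitian, P1 skew
P0 = np.array([[2.0, 0.0], [0.0, 3.0]])
P1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
P2 = np.eye(2)
lam, V = polyeig2(P0, P1, P2)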
Under market frictions like illiquidity or transaction costs, contingent claims
can incorporate some inevitable intrinsic risk that cannot be completely hedged
away but remains with the holder. In general, they cannot be synthesized by dynamic trading in liquid assets and hence cannot be priced by no-arbitrage arguments alone. Still, an agent can determine a valuation with respect to her
preferences towards risk. The utility indifference value for a variation in the
quantity of illiquid assets held by the agent is defined as the compensating variation
of wealth, under which her maximal expected utility remains unchanged.
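In formulas (a sketch of the standard definition; the notation, $H$ for the illiquid payoff and $\vartheta$ for trading strategies, is ours): writing $v(x, q)$ for the maximal expected utility of an agent with initial wealth $x$ holding $q$ illiquid assets, the indifference value $\pi(q)$ of the variation from $0$ to $q$ units is determined by
\[
v\big(x - \pi(q),\, q\big) = v(x,\, 0),
\qquad
v(x, q) = \sup_{\vartheta} \, \mathbb{E}\, U\!\Big( x + \textstyle\int_0^T \vartheta\, dS + q\, H \Big).
\]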
Arrow Debreu Prices
(2010)
Arrow-Debreu prices are the prices of ‘atomic’ time and state contingent
claims which deliver one unit of a specific consumption good if a specific uncertain
state realizes at a specific future date. For instance, claims on the good
‘ice cream tomorrow’ are split into different commodities depending on whether the weather will be good or bad, so that good-weather and bad-weather ice cream
tomorrow can be traded separately. Such claims were introduced by K.J. Arrow
and G. Debreu in their work on general equilibrium theory under uncertainty,
to allow agents to exchange state and time contingent claims on goods. Thereby
the general equilibrium problem with uncertainty can be reduced to a conventional
one without uncertainty. In finite state financial models, Arrow-Debreu
securities delivering one unit of the numeraire good can be viewed as natural
atomic building blocks for all other state-time contingent financial claims; their
prices determine a unique arbitrage-free price system.
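In formulas (a textbook sketch consistent with the last sentence): writing $\pi(\omega)$ for the Arrow-Debreu price of the claim paying one unit of the numeraire in state $\omega$, any contingent claim $X$ in a finite state model is priced by linearity,
\[
\mathrm{price}(X) = \sum_{\omega \in \Omega} \pi(\omega)\, X(\omega),
\]
and absence of arbitrage requires $\pi(\omega) > 0$ for every attainable state.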