This paper presents three different adaptive algorithms for eigenvalue problems
associated with non-selfadjoint partial differential
operators. The basis for the developed algorithms is a homotopy method.
The homotopy method starts from a well-understood selfadjoint problem,
for which well-established adaptive methods are available.
Apart from the adaptive grid refinement, the progress of the homotopy as
well as the accuracy of the iterative eigensolver are adapted to balance the contributions
of the different error sources.
The first algorithm balances the homotopy, discretization and approximation errors with respect
to a fixed step-size $\tau$ in the homotopy.
The second algorithm combines the adaptive step-size control for the homotopy
with an adaptation in space that ensures an error below a fixed tolerance $\varepsilon$.
The third algorithm allows complete adaptivity in space, in the
homotopy step-size, and in the iterative algebraic eigenvalue solver.
All three algorithms are compared in numerical examples.
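As a bare-bones illustration of the homotopy idea itself (not of the adaptive algorithms above), the following NumPy sketch continues an eigenpair along the family H(t) = (1-t)A0 + tA1 from a selfadjoint matrix A0 to a non-selfadjoint target A1 with a fixed step size tau. The matrices and the dense eigensolver are illustrative stand-ins for the discretized operators and iterative solvers of the paper.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A0 = rng.standard_normal((n, n))
A0 = 0.5 * (A0 + A0.T)                        # selfadjoint start problem
A1 = A0 + 0.1 * rng.standard_normal((n, n))   # non-selfadjoint target

w, _ = np.linalg.eigh(A0)                     # well-understood start spectrum
lam = w[0]                                    # continue the smallest eigenvalue

tau = 0.1                                     # fixed homotopy step size
t = tau
while t <= 1.0 + 1e-12:
    H = (1.0 - t) * A0 + t * A1               # homotopy H(t) = (1-t)A0 + tA1
    w, _ = np.linalg.eig(H)
    lam = w[np.argmin(np.abs(w - lam))]       # follow the nearest eigenvalue
    t += tau

print("continued eigenvalue at t=1:", lam)
```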
Given a system of closed convex domains (inclusions) in n-dimensional Euclidean space, new computational meshes are introduced which partition the convex hull of the inclusion set into simple geometric objects. These partitions generalize the concept of Delaunay triangulations by interpreting the inclusions as generalized vertices while the remaining elements of the partition serve as connections between generalized vertices and therefore assume the classical role of edges, faces, etc. The proposed partitions are derived in two different ways: by exploiting duality with respect to certain generalized Voronoi partitions and by generalizing the well-known Delaunay (empty circumcircle) criterion.
Generalized Delaunay partitions are of practical importance for the modeling of particle- and fiber-reinforced composite materials
since they enable an efficient conforming resolution of the highly complicated component geometries. The number of elements in the partitions is proportional to the number of inclusions, which is minimal.
Functional Magnetic Resonance Imaging inherently involves noisy measurements and a severe multiple testing
problem. Smoothing is usually used to reduce the effective number of multiple
comparisons and to locally integrate the signal and hence increase the
signal-to-noise ratio. Here, we provide a new structural adaptive segmentation
algorithm (AS)
that naturally combines the signal detection with noise reduction in one procedure.
Moreover, the new method
is closely related to a recently proposed structural adaptive smoothing
algorithm and preserves shape and spatial extent of activation areas without
blurring the borders.
MINLP Solver Software
(2010)
Motivated by an obstacle problem for a membrane
subject to cohesion forces, constrained minimization problems involving
a non-convex and non-differentiable objective functional
representing the total potential energy are considered. The associated
first order optimality system leads to a hemi-variational inequality,
which can also be interpreted as a special complementarity
problem in function space. Besides an analytical investigation
of first-order optimality, a primal-dual active set solver is introduced.
It is associated with a limit case of a semi-smooth Newton
method for a regularized version of the underlying problem class.
For the numerical algorithms studied in this paper, global as well as
local convergence properties are derived and verified numerically.
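The primal-dual active set idea can be sketched for the convex core of such problems. The NumPy example below iterates on the contact set of a discretized obstacle problem -u'' = f with u >= psi; the 1D data are illustrative and the cohesive, non-convex term of the model above is omitted.
```python
import numpy as np

n = 100
h = 1.0 / (n + 1)
# 1D Laplacian (Dirichlet), load f, and lower obstacle psi -- illustrative data
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
f = -50.0 * np.ones(n)                  # downward load
psi = -0.1 - 0.3 * np.abs(x - 0.5)      # obstacle: u >= psi

u, lam, c = np.zeros(n), np.zeros(n), 1.0
for it in range(50):
    active = lam + c * (psi - u) > 0    # current estimate of the contact set
    u_new = np.zeros(n)
    u_new[active] = psi[active]         # solution touches the obstacle there
    I = ~active                         # elsewhere, solve the PDE rows
    u_new[I] = np.linalg.solve(A[np.ix_(I, I)],
                               f[I] - A[np.ix_(I, active)] @ psi[active])
    lam_new = np.zeros(n)
    lam_new[active] = (A @ u_new - f)[active]   # multiplier on the contact set
    if np.array_equal(active, lam_new + c * (psi - u_new) > 0):
        break                           # active set stabilized: converged
    u, lam = u_new, lam_new

print("contact nodes:", np.sum(active), "after", it + 1, "iterations")
```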
A duality based semismooth Newton framework for solving variational inequalities of the second kind
(2010)
In an appropriate function space setting, semismooth Newton methods are proposed
for iteratively computing the solution of a rather general class of variational inequalities (VIs) of the
second kind. The Newton scheme is based on the Fenchel dual of the original VI problem which
is regularized if necessary. In the latter case, consistency of the regularization with respect to the
original problem is studied. The application of the general framework to specific model problems
including Bingham flows, simplified friction, or total variation regularization in mathematical imaging
is described in detail. Finally, numerical experiments are presented in order to verify the theoretical
results.
In this paper a general form of the infinite-horizon linear quadratic control problem is considered. We will discuss quadratic cost functionals which involve not only the state and input variables but also derivatives of the state and input variables of arbitrary order, under constraints given by linear systems of higher order. We will examine two results that relate the linear quadratic control problem to an optimality system, which is given through a para-Hermitian matrix polynomial. The results can be applied to general rectangular descriptor systems (see Subsection 6.1) to obtain results which so far were only known for square descriptor systems. We will also see that the notion of dissipativity (when introduced in the proper way) is equivalent to the solvability of the linear quadratic control problem.
The behavior approach and the problem of dissipativity have both been introduced and studied extensively by Willems et al. However, a computationally feasible method to check dissipativity has been missing. Current methods mostly rely on symbolic representations of rational functions. We will discuss a new characterization for linear systems in behavior form that allows dissipativity to be checked via the solution of a para-Hermitian polynomial eigenvalue problem. Thus, we can employ standard methods of cubic complexity.
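The cubic-complexity route can be sketched for a quadratic para-even matrix polynomial: linearize to a matrix pencil and call a dense generalized eigensolver. The coefficient matrices below are random illustrative data, and the plain companion form used here ignores the structure preservation one would want in practice.
```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 4
A0 = rng.standard_normal((n, n)); A0 = A0 + A0.T   # symmetric
A1 = rng.standard_normal((n, n)); A1 = A1 - A1.T   # skew-symmetric
A2 = rng.standard_normal((n, n)); A2 = A2 + A2.T   # symmetric

# P(lam) = A0 + lam*A1 + lam^2*A2; first companion linearization:
# L(lam) = lam * diag(A2, I) + [[A1, A0], [-I, 0]]
I = np.eye(n)
Z = np.zeros((n, n))
E = np.block([[A2, Z], [Z, I]])
F = np.block([[A1, A0], [-I, Z]])
lam = eig(F, -E, right=False)      # generalized eigenvalues, O(n^3) cost

# alternating structure: eigenvalues come in +/- pairs
print(np.sort_complex(lam))
```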
Under market frictions like illiquidity or transaction costs, contingent claims
can incorporate some inevitable intrinsic risk that cannot be completely hedged
away but remains with the holder. In general, they cannot be synthesized by
dynamic trading in liquid assets and hence cannot be priced by no-arbitrage arguments alone. Still, an agent can determine a valuation with respect to her
preferences towards risk. The utility indifference value for a variation in the
quantity of illiquid assets held by the agent is defined as the compensating variation
of wealth, under which her maximal expected utility remains unchanged.
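In a one-period toy market the definition can be made concrete: the indifference value is the cash amount pi that restores the optimal expected exponential utility once the claim H is taken on. The scenario model, the static hedge set, and the grid search below are crude illustrative simplifications of the dynamic setting described above.
```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 1.0                               # risk aversion
S = rng.normal(0.05, 0.2, 20000)          # liquid asset return scenarios
H = np.maximum(S - 0.1, 0.0)              # illiquid claim, correlated with S

def max_utility(endowment):
    # crude grid search over a static position theta in the liquid asset
    thetas = np.linspace(-5.0, 5.0, 81)
    return max(np.mean(-np.exp(-alpha * (endowment + th * S)))
               for th in thetas)

u0 = max_utility(0.0)
lo, hi = -1.0, 1.0                        # bisect for U(pi - H) = U(0)
for _ in range(40):
    pi = 0.5 * (lo + hi)
    lo, hi = (pi, hi) if max_utility(pi - H) < u0 else (lo, pi)

print("utility indifference value:", 0.5 * (lo + hi))
```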
Arrow Debreu Prices
(2010)
Arrow-Debreu prices are the prices of ‘atomic’ time and state contingent
claims which deliver one unit of a specific consumption good if a specific uncertain
state realizes at a specific future date. For instance, claims on the good
‘ice cream tomorrow’ are split into different commodities depending whether the
weather will be good or bad, so that good-weather and bad-weather ice cream
tomorrow can be traded separately. Such claims were introduced by K.J. Arrow
and G. Debreu in their work on general equilibrium theory under uncertainty,
to allow agents to exchange state and time contingent claims on goods. Thereby
the general equilibrium problem with uncertainty can be reduced to a conventional
one without uncertainty. In finite state financial models, Arrow-Debreu
securities delivering one unit of the numeraire good can be viewed as natural
atomic building blocks for all other state-time contingent financial claims; their
prices determine a unique arbitrage-free price system.
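A minimal numerical sketch of the last statement: in a finite-state one-period model with as many linearly independent assets as states, the Arrow-Debreu state prices follow from a linear system, and their positivity corresponds to the absence of arbitrage. The payoffs and prices are illustrative.
```python
import numpy as np

# rows: traded assets, columns: states (e.g. good / bad weather)
payoffs = np.array([[1.0, 1.0],    # bond pays 1 in every state
                    [2.0, 0.5]])   # risky asset
prices = np.array([0.95, 1.10])    # observed asset prices today

# price_i = sum_s payoffs[i, s] * q[s]  =>  solve payoffs @ q = prices
q = np.linalg.solve(payoffs, prices)
print("Arrow-Debreu state prices:", q)           # all positive <=> no arbitrage
print("risk-neutral probabilities:", q / q.sum())
```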
We present and compare two different approaches to conditional
risk measures. One approach draws from vector space based convex analysis
and presents risk measures as functions on L^p spaces while the other approach
utilizes module based convex analysis where conditional risk measures are defined on L^p type modules. Both approaches utilize general duality theory for
vector-valued convex functions, in contrast to the current literature in which
we find ad hoc dual representations. By presenting several applications such
as monotone and sub(cash) invariant hulls with corresponding examples we
illustrate that module based convex analysis is well suited to the concept of
conditional risk measures.
In this paper, we propose and investigate numerical methods based on QR factorization for computing all or some Lyapunov or Sacker-Sell spectral intervals for
linear differential-algebraic equations.
Furthermore, a perturbation and error analysis for these methods is presented. We
investigate how errors in the data and in the numerical integration affect the
accuracy of the approximate spectral intervals. Although some
differential-algebraic systems must be integrated numerically over usually very
long time intervals, it is shown that, under certain assumptions, the error of the
computed spectral intervals can be controlled by the local error of the numerical
integration and the error in solving the algebraic constraint.
Some numerical examples are presented to illustrate the theoretical results.
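The QR-based mechanism behind such methods can be sketched for a plain linear ODE x' = A(t)x; for a DAE one would additionally resolve the algebraic constraint at every step, which this sketch omits. The coefficient matrix is illustrative, with Lyapunov exponents -1 and -3.
```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    return np.array([[-1.0, np.sin(t)],
                     [0.0, -3.0]])

n, T, steps = 2, 200.0, 2000
Q = np.eye(n)
log_diag = np.zeros(n)
ts = np.linspace(0.0, T, steps + 1)
for t0, t1 in zip(ts[:-1], ts[1:]):
    # propagate the orthonormal frame Q over [t0, t1] ...
    sol = solve_ivp(lambda t, y: (A(t) @ y.reshape(n, n)).ravel(),
                    (t0, t1), Q.ravel(), rtol=1e-9, atol=1e-12)
    # ... then reorthogonalize and record the growth factors diag(R)
    Q, R = np.linalg.qr(sol.y[:, -1].reshape(n, n))
    log_diag += np.log(np.abs(np.diag(R)))

print("Lyapunov exponents ~", log_diag / T)   # expect about [-1, -3]
```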
We estimate potential energy savings in IP-over-WDM networks achieved by switching off router line cards in low-demand hours. We compare three approaches to reacting to dynamics in the IP traffic over time: FUFL,
DUFL and DUDL. They provide different levels of freedom in adjusting the routing of lightpaths in the WDM layer and the routing of demands in the IP layer. Using MILP models based on realistic network topologies and node architectures as well as realistic demands, power, and cost values, we show that already a simple monitoring of the lightpath utilization in order to deactivate empty line cards (FUFL) brings substantial
benefits. The most significant savings, however, are achieved by rerouting traffic in the IP layer (DUFL), which allows emptying and deactivating lightpaths together with the corresponding line cards. A
sophisticated reoptimization of the virtual topologies and the routing in the optical domain for every demand scenario (DUDL) yields nearly no additional profits in the considered networks.
A new implicitly-restarted Krylov subspace method
for real symmetric/skew-symmetric generalized eigenvalue problems
is presented. The new method improves and generalizes the SHIRA method
to the case where the skew-symmetric matrix is singular.
It computes a few eigenvalues and eigenvectors of the matrix pencil
close to a given target point. Several applications from control theory are
presented and the properties of the new method are illustrated by benchmark
examples.
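SHIRA itself is not part of standard libraries, but the generic mechanism it refines, shift-invert Arnoldi iteration towards eigenvalues near a target, is available off the shelf. The sparse test matrix below is illustrative and carries none of the Hamiltonian structure the method above exploits.
```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 2000
# illustrative sparse non-symmetric matrix with real spectrum in (0.59, 3.41)
A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -0.5 * np.ones(n - 1)],
             [0, -1, 1], format='csc')

target = 1.0
vals = eigs(A, k=6, sigma=target, return_eigenvectors=False)
print(np.sort_complex(vals))   # the six eigenvalues closest to the target
```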
In this paper, we investigate the interconversion processes of the major flame retardant - 1,2,5,6,9,10-hexabromocyclododecane (HBCD) - by means of statistical thermodynamics based on classical force fields. Three ideas will be presented. First, the application of classical hybrid Monte-Carlo simulations for quantum mechanical processes will be justified. Second, the problem of insufficient convergence properties of hybrid Monte-Carlo methods for the generation of low-temperature canonical ensembles will be solved by an interpolation approach. Furthermore, it will be shown how free energy differences can be used for a rate matrix computation. The results of our numerical simulations will be compared to experimental results.
Keywords: Markov process, molecular dynamics, rate matrix
Given a general mixed integer program (MIP), we automatically detect block structures in the constraint matrix together with the coupling by capacity constraints arising from multi-commodity flow formulations. We
identify the underlying graph and generate cutting planes based on cuts in the detected network. Our implementation adds a separator to the branch-and-cut libraries of SCIP and CPLEX. We make use of the complemented mixed integer rounding framework (cMIR) but provide a special-purpose aggregation heuristic that exploits the network structure. Our separation scheme speeds up the computation for a large set of MIPs coming from network design problems
by a factor of two on average.
We study the perturbation theory of structured matrices under
structured rank one perturbations, and then focus on several classes of complex
matrices. Generic Jordan structures of perturbed matrices are identified.
It is shown that the perturbation
behavior of the Jordan structures
is substantially different from the corresponding theory for unstructured generic
rank one perturbations.
A one-dimensional Kohn-Sham system for spin particles is considered which effectively describes semiconductor nanostructures and which is investigated at zero temperature. We prove the existence of solutions and derive a priori estimates. For this purpose we find estimates for eigenvalues of the Schrödinger operator with effective Kohn-Sham potential and obtain $W^{1,2}$-bounds of the associated particle density operator. Afterwards, compactness and continuity results allow us to apply Schauder's fixed point theorem. In the case of a vanishing exchange-correlation potential, uniqueness is shown by monotonicity arguments. Finally, we investigate the behavior of the system as the temperature approaches zero.
We consider drift-diffusion equations for semiconductor devices in Lebesgue spaces. To that end we reformulate the (generalized) van Roosbroeck system as an evolution equation for the potentials to the driving forces of the currents of electrons and holes. This evolution equation falls into a class of quasi-linear parabolic systems which allow a unique, local-in-time solution in certain Lebesgue spaces. In particular, it turns out that the divergence of the electron and hole current is an integrable function. Hence, Gauss' theorem applies and gives the foundation for a space discretization of the equations by means of finite volume schemes. Moreover, the strong differentiability of the electron and hole density in time is constitutive for the implicit time discretization scheme. Finite volume discretization in space and implicit time discretization are accepted practice in engineering and scientific computing. This investigation puts special emphasis on non-smooth spatial domains, mixed boundary conditions, and heterogeneous material compositions, as required in electronic device simulation.
The goal of this paper is to show how the heat treatment of steel can be modelled in terms of a mathematical optimal
control problem. The approach is applied to laser surface hardening and the cooling of a steel slab including
mechanical effects. Finally, it is shown how the results can be utilized in industrial practice by a coupling with
machine-based control.
In this paper we investigate the performance of several out-of-the box solvers for mixed-integer quadratically constrained programmes (MIQCPs) on an open pit mine production scheduling problem with mixing constraints. We compare the solvers BARON, Couenne, SBB, and SCIP to a problem-specific algorithm on two different MIQCP formulations. The computational results presented show that general-purpose solvers with no particular knowledge of problem structure are able to nearly match the performance of a hand-crafted algorithm.
Stability-based methods for scenario generation in stochastic programming are reviewed. In particular, we briefly discuss Monte Carlo sampling, Quasi-Monte Carlo methods, quadrature rules based on sparse grids, and optimal quantization. In addition, we provide some convergence results based on recent developments in multivariate integration. The method of optimal scenario reduction and techniques for scenario tree generation are also reviewed.
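Two of the reviewed sampling routes can be contrasted in a few lines: plain Monte Carlo versus a scrambled Sobol Quasi-Monte Carlo rule, here on a smooth 5-dimensional test integrand with known integral 1. The integrand and sample sizes are illustrative; scenario trees and optimal quantization need more machinery.
```python
import numpy as np
from scipy.stats import qmc

d, n = 5, 2**12
f = lambda u: np.prod(1.0 + 0.5 * (u - 0.5), axis=1)   # exact integral = 1

rng = np.random.default_rng(4)
mc = f(rng.random((n, d))).mean()                       # plain Monte Carlo
sobol = f(qmc.Sobol(d, scramble=True, seed=4).random(n)).mean()

print("MC error   :", abs(mc - 1.0))
print("Sobol error:", abs(sobol - 1.0))                 # typically much smaller
```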
An adaptive finite element semi-smooth Newton solver for
the Cahn-Hilliard model with double obstacle free energy is proposed. For this purpose, the governing system is discretised in time using a semi-implicit scheme, and the resulting time-discrete system is formulated as an optimal control problem with pointwise constraints on the control. For the numerical solution of the optimal control problem, we propose a function space based algorithm which combines a Moreau-Yosida regularization technique for handling the control constraints with a semi-smooth Newton method for solving the optimality systems of the resulting sub-problems. Further, for the discretization in space and in connection with the proposed algorithm, an adaptive finite element method is considered. The performance of the overall algorithm is illustrated by numerical experiments.
We study the dynamics of a ring of unidirectionally coupled autonomous
Duffing oscillators. Starting from a situation where the individual
oscillator without coupling has only trivial equilibrium dynamics,
the coupling induces complicated transitions to periodic, quasiperiodic,
chaotic, and hyperchaotic behavior. We study these transitions in
detail for small and large numbers of oscillators. Particular attention
is paid to the role of unstable periodic solutions for the appearance
of chaotic rotating waves, spatiotemporal structures and the Eckhaus
effect for a large number of oscillators. Our analytical and numerical
results are confirmed by a simple experiment based on the electronic
implementation of coupled Duffing oscillators.
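A minimal simulation of such a ring is easy to set up with SciPy; the coupling form, damping, and coupling strength below are illustrative choices rather than the parameter values studied above, so the dynamical regime reached may differ.
```python
import numpy as np
from scipy.integrate import solve_ivp

N, d, c = 6, 0.3, 1.0        # ring size, damping, coupling (illustrative)

def rhs(t, y):
    x, v = y[:N], y[N:]
    # unidirectional ring coupling: oscillator i listens to oscillator i-1
    acc = -d * v - x - x**3 + c * (np.roll(x, 1) - x)
    return np.concatenate([v, acc])

rng = np.random.default_rng(5)
y0 = 0.1 * rng.standard_normal(2 * N)          # small random initial data
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-9, atol=1e-9)
print("final amplitudes:", np.round(sol.y[:N, -1], 3))
```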
We present a novel coder for lossless compression
of adaptive multiresolution meshes that exploits their special
hierarchical structure. The heart of our method is a new
progressive connectivity coder that can be combined with
leading geometry encoding techniques. The compressor uses
the parent/child relationships inherent to the hierarchical mesh.
We use the rules of the refinement scheme and
store bits only where the scheme leaves freedom of choice, leading to
compact codes that are free of redundancy. To illustrate our
scheme we chose the widespread red-green refinement, but
the underlying concepts can be directly transferred to other
adaptive refinement schemes as well. The compression ratio of
our method exceeds that of state-of-the-art coders by a factor
of 2 to 3 on most of our benchmark models.
We suggest hierarchical a posteriori error estimators for time-discretized
Allen-Cahn and Cahn-Hilliard equations with logarithmic potential and investigate
their robustness numerically.
We observe that the associated effectivity ratios seem to saturate for decreasing mesh size
and are almost independent of the temperature.
For selfadjoint matrices in an indefinite inner product, possible canonical forms are identified that arise when the matrix is subjected to a selfadjoint generic rank one perturbation. Genericity is understood in
the sense of algebraic geometry. Special attention is paid to the perturbation
behavior of the sign characteristic. Typically, under such a perturbation,
for every given eigenvalue, the largest Jordan block of the eigenvalue is
destroyed and (in case the eigenvalue is real) all other Jordan blocks
keep their sign characteristic. The new eigenvalues, i.e., those eigenvalues of
the perturbed matrix that are not eigenvalues of the original matrix,
are typically simple, and in some cases information is provided about their sign
characteristic (if the new eigenvalue is real). The main results are proved by using
the well-known canonical forms of selfadjoint matrices in an indefinite inner product, a version of the Brunovsky
canonical form, and general results concerning rank one perturbations.
We study two related problems in non-preemptive scheduling and packing of malleable tasks with precedence constraints to minimize the makespan. We distinguish the scheduling variant, in which we allow the free choice of processors, and the packing variant, in which a task must be assigned to a contiguous subset of processors.
For precedence constraints of bounded width, we completely resolve the complexity status for any particular problem setting concerning width bound and number of processors, and give polynomial-time algorithms with best possible performance. For both scheduling and packing malleable tasks, we present an FPTAS for the NP-hard problem variants and exact algorithms for all remaining special cases. To obtain the positive results, we do not require the common monotonous penalty assumption on processing times, whereas our hardness results hold even when assuming this restriction.
With the close relation between contiguous scheduling and strip packing, our FPTAS
is the first (and best possible) constant factor approximation for (malleable) strip packing under special precedence constraints.
This paper is intended to be a first step towards the continuous dependence of dynamical contact problems on the initial data as well as the uniqueness of a solution. Moreover, it provides the basis for a proof of the convergence of popular time integration schemes such as the Newmark method.
We study a frictionless dynamical contact problem between both linearly elastic and viscoelastic bodies which is formulated via the Signorini contact conditions. For viscoelastic materials fulfilling the Kelvin-Voigt constitutive law, we find a characterization of the class of problems which satisfy a perturbation result in a non-trivial mix of norms in function space. This characterization is given in the form of a stability condition on the contact stresses at the contact boundaries.
Furthermore, we present perturbation results for two well-established approximations of the classical Signorini condition: The Signorini condition formulated in velocities and the model of normal compliance, both satisfying even a sharper version of our stability condition.
This paper deals with error estimates for space-time discretizations in the context of evolutionary variational inequalities of rate-independent type. After introducing a general abstract evolution problem, we address a fully-discrete approximation and provide a priori error estimates. The application of the abstract theory to a semilinear case is detailed. In particular, we provide explicit space-time convergence rates for the isothermal Souza-Auricchio model for shape-memory alloys.
This paper presents a combined adaptive finite element method with an iterative algebraic eigenvalue solver for the Laplace eigenvalue problem of quasi-optimal computational complexity. The analysis is based on a direct approach for eigenvalue problems and allows the use of higher order conforming finite element spaces with fixed polynomial degree k>0. The optimal adaptive finite element eigenvalue solver (AFEMES) involves a proper termination criterion for the algebraic eigenvalue solver and does not need any coarsening. Numerical evidence illustrates the optimal computational complexity.
On probabilistic constraints induced by rectangular sets and multivariate normal distributions
(2009)
In this paper, we consider optimization problems under probabilistic constraints which are defined by two-sided
inequalities for the underlying normally distributed random vector. As a main step
for an algorithmic solution of such problems, we derive a derivative formula for (normal) probabilities
of rectangles as functions of their lower or upper bounds. This formula allows the calculation
of such derivatives to be reduced to the calculation of (normal) probabilities of rectangles themselves, thus generalizing a
similar well-known statement for multivariate normal distribution functions. As an application, we consider
a problem from water reservoir management. One of the outcomes of the problem solution is that the
(still frequently encountered) use of simple individual probabilistic constraints can completely fail. In contrast, the
(more difficult) use of joint probabilistic constraints which heavily depends on the derivative formula mentioned
before yields very reasonable and robust solutions over the whole time horizon considered.
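The derivative formula at the heart of this approach can be checked numerically in two dimensions: differentiating the rectangle probability with respect to an upper bound yields a one-dimensional density times a conditional rectangle probability. All distribution parameters below are illustrative.
```python
import numpy as np
from scipy.stats import multivariate_normal, norm

mu = np.array([0.0, 0.0])
Sig = np.array([[1.0, 0.5],
                [0.5, 2.0]])
a = np.array([-1.0, -1.5])
b = np.array([0.8, 1.2])

F = multivariate_normal(mu, Sig).cdf
def rect(lo, hi):                    # P(lo <= X <= hi) by inclusion-exclusion
    return F(hi) - F([lo[0], hi[1]]) - F([hi[0], lo[1]]) + F(lo)

# d/db[0] P(a <= X <= b): 1-D density at b[0] times conditional probability
m_c = mu[1] + Sig[1, 0] / Sig[0, 0] * (b[0] - mu[0])     # conditional mean
s_c = np.sqrt(Sig[1, 1] - Sig[1, 0] ** 2 / Sig[0, 0])    # conditional std
deriv = norm.pdf(b[0], mu[0], np.sqrt(Sig[0, 0])) * \
        (norm.cdf(b[1], m_c, s_c) - norm.cdf(a[1], m_c, s_c))

h = 1e-5                             # finite-difference check
e0 = np.array([h, 0.0])
fd = (rect(a, b + e0) - rect(a, b - e0)) / (2 * h)
print(deriv, fd)                     # the two values should agree
```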
Alternating matrix polynomials, that is, polynomials whose coefficients
alternate between symmetric and skew-symmetric matrices,
generalize the notions of even and odd scalar polynomials.
We investigate the Smith forms of alternating matrix polynomials,
showing that each invariant factor is an even or odd scalar polynomial.
Necessary and sufficient conditions
are derived for a given Smith form to be that of an alternating matrix polynomial.
These conditions allow a characterization of the possible Jordan structures
of alternating matrix polynomials,
and also lead to necessary and sufficient conditions
for the existence of structure-preserving strong linearizations.
Most of the results are applicable to singular as well
as regular matrix polynomials.
The paper proposes goal-oriented error estimation and mesh refinement
for optimal control problems with elliptic PDE constraints using the value
of the reduced cost functional as quantity of interest. Error representation,
hierarchical error estimators, and greedy-style error indicators are derived and
compared to their counterparts when using the all-at-once cost functional as
quantity of interest. Finally, the efficiency of the error estimator and generated
meshes are demonstrated on numerical examples.
We introduce geodesic finite elements as a new way to discretize
the nonlinear configuration space of a geometrically exact Cosserat rod.
These geodesic finite elements naturally generalize standard one-dimensional
finite elements to spaces of functions with values in a Riemannian manifold.
For the special orthogonal group, our approach reproduces the
interpolation formulas of [Crisfield/Jelenic:1999].
Geodesic finite elements are
conforming and lead to objective and path-independent problem formulations.
We introduce geodesic finite elements for general Riemannian manifolds,
discuss the relationship between geodesic finite elements and
coefficient vectors, and estimate the interpolation error.
Then we use them to find static equilibria of hyperelastic Cosserat rods.
Using the Riemannian trust-region algorithm of [Absil/Mahony/Sepulchre:2008]
we show numerically that the discretization error depends optimally on
the mesh size.
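The elementary building block, geodesic interpolation between two nodal values in SO(3), reduces to spherical linear interpolation and can be tried directly with SciPy; the two rotations below are illustrative, and higher-order geodesic finite elements replace this by weighted Riemannian averages.
```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# nodal values of a first-order geodesic finite element on one 1D element
R0 = Rotation.from_euler('z', 0.0, degrees=True)
R1 = Rotation.from_euler('z', 120.0, degrees=True)

interp = Slerp([0.0, 1.0], Rotation.concatenate([R0, R1]))
mid = interp(0.5)                    # geodesic midpoint on SO(3)
print(np.degrees(mid.as_rotvec()))   # ~ [0, 0, 60]: 60 degrees about z
```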
The generalized Langevin equation is useful for modeling a wide
range of physical processes. Unfortunately, its parameters,
especially the memory function, are difficult to determine for
nontrivial processes. In this paper, relations between a
time-discrete generalized Langevin model and discrete multivariate
autoregressive (AR) or autoregressive moving average models (ARMA)
are established. This allows a wide range of discrete linear
methods known from time series analysis to be applied. In
particular, the determination of the memory function via the
order of the respective AR or ARMA model is addressed. The method
is illustrated on a one-dimensional test system and subsequently
applied to the molecular dynamics of a biomolecule which exhibits
an interesting relationship between the solvent method used, the
molecular conformation and the depth of the memory.
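The discrete side of this correspondence is easy to demonstrate: fit an AR(p) model to a scalar series by least squares and read the effective memory depth off the fitted coefficients. The synthetic series below has a known AR(2) memory and is purely illustrative.
```python
import numpy as np

rng = np.random.default_rng(6)
T, p = 20000, 3
# synthetic series with known AR(2) memory
x = np.zeros(T)
for t in range(2, T):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + rng.standard_normal()

# least-squares AR(p) fit: x_t ~ sum_k a_k x_{t-k}
X = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])
a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
print("fitted AR coefficients:", np.round(a, 3))   # ~ [1.2, -0.4, 0.0]
```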
We propose an algorithm for the fast and efficient simulation of polymers represented by
chains of hard spheres. The particles are linked by holonomic bond constraints.
While the motion of the polymers is free (i.e., no collisions occur), the equations
of motion can be easily integrated using a collocation-based partitioned Gauss-Runge-Kutta method.
The method is reversible, symplectic and preserves energy. Moreover the numerical scheme allows the integration using much longer time steps than any explicit integrator such as the popular Verlet method. If polymers collide the point of impact can be determined to arbitrary precision by simple nested intervals. Once the collision point is known the impulsive contribution can be computed analytically. We illustrate our approach by means of a suitable numerical example.
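The nested-interval collision detection can be sketched for two spheres on known free-flight trajectories; the ballistic paths and radii below are illustrative (in the scheme above, positions between time steps would come from the collocation polynomial instead).
```python
import numpy as np

r = 0.5                                        # sphere radius
x1 = lambda t: np.array([0.0, 0.0]) + t * np.array([1.0, 0.0])
x2 = lambda t: np.array([3.0, 0.2]) + t * np.array([-1.0, 0.0])
gap = lambda t: np.linalg.norm(x1(t) - x2(t)) - 2 * r   # signed distance

lo, hi = 0.0, 1.5                 # bracket with gap(lo) > 0 > gap(hi)
assert gap(lo) > 0 > gap(hi)
for _ in range(60):               # nested intervals: interval halves each step
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)

print("first collision at t ~", 0.5 * (lo + hi))   # exact: (3 - sqrt(0.96))/2
```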
We study Balanced Truncation for stochastic differential equations. In doing so, we adopt ideas from large deviations theory and discuss notions of controllability and observability for dissipative Hamiltonian systems with degenerate noise term, also known as Langevin equations. For partially observed Langevin equations, we illustrate model reduction by balanced truncation with an example from molecular dynamics and discuss aspects of structure preservation.
For stable linear input-output systems, the method of balanced truncation (B.C. Moore, IEEE Trans. Auto. Contr. AC-26, 17-32, 1981) consists in finding a coordinate transformation such that modes which are least sensitive to the external input (controllability) also give the least output (observability) and therefore can be neglected. A drawback is that projecting the original equations of motion onto the subspace of interest typically fails to preserve the problem's physical structure, e.g., if the original equations are of second-order form or Hamiltonian. For Hamiltonian systems, a natural way of restricting a system to a subspace is by means of constraints, and we show, employing singular perturbation arguments, that balanced truncation can be done in a structure-preserving fashion. The reduced Hamiltonian system obtained in this way preserves stability and passivity and satisfies the usual balanced truncation error bound.
We present a formal procedure for structure-preserving model reduction of linear second-order control problems that appear in a variety of physical contexts, e.g., vibromechanical systems or electrical circuit design. Typical balanced truncation methods that project onto the subspace of the largest Hankel singular values fail to preserve the problem's physical structure and may suffer from lack of stability. In this paper we adopt the framework of generalized Hamiltonian systems that covers the class of relevant problems and that allows for a generalization of balanced truncation to second-order problems.
It turns out that the Hamiltonian structure, stability and passivity are preserved if the truncation is done by imposing a holonomic constraint on the system rather than standard Galerkin projection.
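For reference, standard (non-structure-preserving) balanced truncation via the two Gramians takes only a few lines; the system matrices are random illustrative data, and it is exactly the final Galerkin projection below that the constrained approach above replaces.
```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(7)
n, m, k = 8, 2, 3                       # full order, inputs/outputs, reduced order
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # make A stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))

P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian

# square-root balancing: SVD of the product of Cholesky factors
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, s, Vt = svd(Lq.T @ Lp)
print("Hankel singular values:", np.round(s, 4))

S = np.diag(s[:k] ** -0.5)
W = Lq @ U[:, :k] @ S                    # left projector (W.T @ V = I)
V = Lp @ Vt[:k].T @ S                    # right projector
Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V # reduced, balanced realization
```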
The Steiner connectivity problem is a generalization of
the Steiner tree problem. It consists in finding a minimum cost set of
simple paths to connect a subset of nodes in an undirected graph.
We show that important polyhedral and algorithmic results on the
Steiner tree problem carry over to the Steiner connectivity problem,
namely, the Steiner cut and the Steiner partition inequalities, as
well as the associated polynomial time separation algorithms, can be
generalized. Similar to the Steiner tree case, a certain directed
formulation, which is stronger than the natural undirected one,
plays a central role.
For many fundamental cooperative cost sharing games, especially when costs are supermodular, it is known that Moulin mechanisms inevitably suffer from poor budget balance factors. Mehta, Roughgarden, and Sundararajan recently introduced acyclic mechanisms, which achieve a slightly weaker notion of group-strategyproofness, but leave more flexibility to improve upon the approximation guarantees with respect to budget balance and social cost.
In this paper, we provide a very simple but powerful method for turning any rho-approximation algorithm for a combinatorial optimization problem into a rho-budget balanced acyclic mechanism. Hence, we show that there is no gap between the best possible approximation guarantees of full-knowledge approximation algorithms and weakly group-strategyproof cost sharing mechanisms.
The applicability of our method is demonstrated by deriving mechanisms for scheduling and network design problems which beat the best possible budget balance factors of Moulin mechanisms. By elaborating our framework, we provide means to construct weakly group-strategyproof mechanisms with approximate social cost. The mechanisms we develop for completion time scheduling problems perform surprisingly well by achieving the first constant budget balance and social cost factors.
When managing energy- or weather-related risk, often only imperfect hedging instruments are available. In the first part we illustrate problems arising with imperfect hedging by studying a toy model. We consider an airline’s problem of covering income risk due to fluctuating kerosene prices by investing into futures written on heating oil with closely correlated price dynamics. In the second part we outline recent results on exponential utility based cross hedging concepts. They culminate in a generalization of the Black-Scholes delta hedge formula to incomplete markets. Its derivation is based on a purely stochastic approach to utility maximization. It interprets stochastic control problems in the BSDE language and profits from the power of the stochastic calculus of variations.
We consider backward stochastic differential equations with drivers of quadratic growth (qgBSDE). We prove several statements concerning path regularity and stochastic smoothness of the solution processes of the qgBSDE, in particular we prove an extension of Zhang's path regularity theorem to the quadratic growth setting. We give explicit convergence rates for the difference between the solution of a qgBSDE and its truncation, filling an important gap in numerics for qgBSDE. We give an alternative proof of second order Malliavin differentiability for BSDE with drivers that are Lipschitz continuous (and differentiable), and then derive the same result for qgBSDE.
We show that the spectrum of linear delay differential equations with
large delay splits into two different parts. One part, called the
strong spectrum, converges to isolated points when the delay parameter
tends to infinity. The other part, called the pseudocontinuous spectrum,
accumulates near criticality and converges after rescaling to a set
of spectral curves, called the asymptotic continuous spectrum. We
show that the spectral curves and strong spectral points provide a
complete description of the spectrum for sufficiently large delay
and can be calculated comparatively easily from approximate expressions.
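For the scalar prototype x'(t) = a x(t) + b x(t - tau), the full set of characteristic roots is available through the branches of the Lambert W function, which makes the splitting easy to observe numerically; the parameter values below are illustrative.
```python
import numpy as np
from scipy.special import lambertw

a, b, tau = 0.3, 0.1, 50.0         # large delay
# characteristic roots of  lambda = a + b*exp(-lambda*tau):
# lambda_k = a + W_k(b*tau*exp(-a*tau)) / tau  over the Lambert W branches k
ks = np.arange(-30, 31)
lam = a + lambertw(b * tau * np.exp(-a * tau), ks) / tau

re = np.sort(lam.real)
print("strong spectral point       ~", re[-1])   # stays near a = 0.3
print("pseudocontinuous real parts ~", re[:3])   # shrink like 1/tau
```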
Local existence, uniqueness and smooth dependence for nonsmooth quasilinear parabolic problems
(2009)
We prove local existence, uniqueness, Hölder regularity in space and time, and smooth dependence in Hölder spaces for a general class of quasilinear parabolic initial boundary value problems with nonsmooth data.
As a result the gap between low smoothness of the data, which is typical for
many applications, and high smoothness of the solutions, which is necessary
for the applicability of differential calculus to abstract formulations of the
initial boundary value problems, has been closed. The theory works for any
space dimension, and the nonlinearities are allowed to be nonlocal and to
have any growth. The main tools are new maximal regularity results [19, 20]
in Sobolev–Morrey spaces for linear parabolic initial boundary value problems
with nonsmooth data, linearization techniques and the Implicit Function Theorem.
In this paper we study a certain cardinality constrained packing integer program which is motivated by the problem of dimensioning a cut in a two-layer network. We prove NP-hardness and consider the facial structure of the corresponding polytope. We provide a complete description for the smallest nontrivial case and develop two general classes of facet-defining inequalities. This approach extends the
notion of the well-known cutset inequalities to two network layers.
In this paper, we present a model-based optimization approach for the design of multi-layer networks. The proposed framework is based on a series of increasingly abstract models – from a general technical system model to a problem specific mathematical model – which are used in a planning cycle to optimize the multi-layer networks. In a case study we show how central design questions for an IP-over-WDM network architecture can be answered
using this approach. Based on reference networks from the German research project EIBONE, we investigate the influence of various planning parameters on the total design cost. This includes a comparison of point-to-point vs. transparent optical layer architectures, different traffic distributions, and the use of PoS vs. Ethernet interfaces.
We study superhedging of contingent claims with physical delivery in a discrete-time market model with convex transaction costs. Our model extends Kabanov's currency market model by allowing for nonlinear illiquidity effects. We show that an appropriate generalization of Schachermayer's robust no arbitrage condition implies that the set of claims hedgeable with zero cost is closed in probability. Combined with classical techniques of convex analysis, the closedness yields a dual characterization of premium processes that are sufficient to superhedge a given claim process. We also extend the fundamental theorem of asset pricing for general conical models.