Chimera states are particular trajectories
in systems of phase oscillators with non-local coupling
that display a spatio-temporal pattern of coherent and incoherent motion.
We present here a detailed analysis
of the spectral properties for such trajectories.
First, we study numerically their Lyapunov spectrum
and its behavior for an increasing number of oscillators.
The spectra demonstrate the hyperchaotic nature of the chimera states
and show a correspondence of the Lyapunov dimension
with the number of incoherent oscillators.
Then, we pass to the thermodynamic limit equation
and present an analytic approach
to the spectrum of a corresponding linearized evolution operator.
We show that in this setting, the chimera state is neutrally stable
and that the continuous spectrum coincides with the limit
of the hyperchaotic Lyapunov spectrum obtained for the finite size systems.
Lyapunov and exponential dichotomy spectral theory is extended
from ordinary differential equations (ODEs) to nonautonomous
differential-algebraic equations (DAEs). By using orthogonal
changes of variables, the original DAE system is transformed into
appropriate condensed forms, for which concepts such as Lyapunov
exponents, Bohl exponents, exponential dichotomy and spectral
intervals of various kinds can be analyzed via the resulting
underlying ODE. Some essential differences between the spectral
theory for ODEs and that for DAEs are pointed out. Numerical
methods for computing the spectral intervals associated with
Lyapunov and Sacker-Sell (exponential dichotomy) spectra are
derived by modifying and extending those methods proposed for ODEs. Perturbation theory and error analysis are discussed, as
well. Finally, some numerical examples are presented to illustrate
the theoretical results and the properties of the numerical
methods.
This article deals with the spectra of Laplacians of weighted graphs. In this context, two objects are of fundamental importance for the dynamics of complex networks: the second eigenvalue of such a spectrum (called algebraic connectivity) and its associated eigenvector, the so-called Fiedler vector. Here we prove that, given a Laplacian matrix, it is possible to perturb the weights of the existing edges in the underlying graph in order to obtain simple eigenvalues and a Fiedler vector composed of only non-zero entries. These structural genericity properties with the constraint of not adding edges in the underlying
graph are stronger than the classical ones, for which arbitrary structural perturbations are allowed. These results open the opportunity to understand the impact of structural changes on the dynamics of complex systems.
Polzehl and Spokoiny (2000) introduced the adaptive weights smoothing
(AWS) procedure in the context of image denoising. The procedure
has some remarkable properties like preservation of edges and contrast,
and (in some sense) optimal reduction of noise. The procedure is fully
adaptive and dimension free. Simulations with artificial images show
that AWS is superior to classical smoothing techniques especially when
the underlying image function is discontinuous and can be well approximated
by a piecewise constant function. However, the latter assumption
can be rather restrictive for a number of potential applications. Here we
present a new method based on the ideas of propagation and separation
which extends the AWS procedure to the case of an arbitrary local linear
parametric structure. We also establish some important results about
properties of the new ‘propagation-separation’ procedure including rate
optimality in the pointwise and global sense. The performance of the
procedure is illustrated by examples for local polynomial regression and
by applications to artificial and real images.
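To make the idea of propagation and separation concrete, the following minimal sketch implements a single adaptive-weights iteration in the simplest local constant, one-dimensional setting; the Gaussian location kernel, the exponential statistical penalty, and all parameter values are illustrative assumptions and not the authors' exact procedure.

```python
import numpy as np

def aws_iteration(y, theta, h, lam, sigma2):
    """One adaptive-weights smoothing step for a 1D signal (local constant case).

    y      : noisy observations
    theta  : current estimates (e.g. the previous iteration's output)
    h      : location bandwidth
    lam    : statistical penalty parameter
    sigma2 : noise variance
    """
    n = len(y)
    x = np.arange(n)
    theta_new = np.empty(n)
    for i in range(n):
        # location weights: closer points count more
        w_loc = np.exp(-0.5 * ((x - x[i]) / h) ** 2)
        # statistical penalty: points whose current estimate differs
        # strongly from theta[i] are down-weighted
        w_stat = np.exp(-((theta - theta[i]) ** 2) / (2.0 * lam * sigma2))
        w = w_loc * w_stat
        theta_new[i] = np.sum(w * y) / np.sum(w)
    return theta_new

# usage: start from the raw data and iterate with a growing bandwidth
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise constant image function
y = signal + 0.2 * rng.standard_normal(100)
theta = y.copy()
for h in (1.0, 2.0, 4.0, 8.0):
    theta = aws_iteration(y, theta, h=h, lam=5.0, sigma2=0.04)
```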
Three properties of matrices, the spark, the mutual incoherence, and the restricted isometry property, have recently been introduced in the context of compressed sensing. We study these properties for matrices that are Kronecker products and show how these properties relate to those of the factors. For the mutual incoherence we also
discuss results for sums of Kronecker products.
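As a small numerical illustration of how one such property transfers to Kronecker products, the sketch below computes the mutual incoherence (largest absolute inner product of distinct normalized columns) of two random matrices and of their Kronecker product; the helper function and matrix sizes are arbitrary choices, and the final check reflects the known relation that the coherence of a Kronecker product equals the maximum of the factor coherences.

```python
import numpy as np

def mutual_incoherence(M):
    """Largest absolute inner product between distinct normalized columns of M."""
    G = M / np.linalg.norm(M, axis=0)   # normalize columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)         # ignore the trivial diagonal
    return gram.max()

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 12))
B = rng.standard_normal((6, 10))

mu_A = mutual_incoherence(A)
mu_B = mutual_incoherence(B)
mu_kron = mutual_incoherence(np.kron(A, B))

# for the mutual incoherence one finds mu(A (x) B) = max(mu(A), mu(B))
print(mu_A, mu_B, mu_kron, np.isclose(mu_kron, max(mu_A, mu_B)))
```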
A new concept is introduced for the adaptive finite element discretization of partial differential equations that have a sparsely
representable solution. Motivated by recent work on compressed sensing, a recursive mesh refinement procedure is presented that uses linear programming to find a good approximation to the sparse solution on a given refinement level. Then only those parts of the mesh are refined that belong to nonzero expansion coefficients. Error estimates for this procedure are refined and the behavior of the procedure is demonstrated via some simple elliptic model problems.
A state-constrained optimal boundary control problem governed by a linear elliptic equation is considered. In order to obtain the optimality conditions for the solutions to the model problem, a Slater assumption has to be made that restricts the theory to the two-dimensional case. This difficulty is overcome by a source representation of the control combined with a Lavrentiev type regularization. Optimality conditions for the regularized problem are derived, where the corresponding Lagrange multipliers have $L^2$-regularity. By the spectral theorem for compact and normal operators, a convergence result is shown. Moreover, convergence of the adjoint state associated with the regularized problem is shown as the regularization parameter vanishes. Finally, the uniform boundedness of the regularized Lagrange multipliers in $L^1(\Omega)$ is verified by a maximum principle argument.
The purpose of this paper is the analysis of dynamic iteration methods for
the numerical integration of coupled systems of ODEs and DAEs.
We will investigate convergence of these methods and put special emphasis
on the {\sc Jacobi}- and {\sc Gauss-Seidel} methods. Furthermore, the
fundamental difference in the convergence behaviour of coupled ODEs and DAEs
is pointed out. This difference is used to explain why certain relaxation methods
for coupled DAEs may fail. Finally, a remedy to this undesirable
effect is proposed that makes use of a so-called {\em preconditioned dynamic
iteration} strategy. This regularization also allows a significant reduction in the number of dynamic iteration steps.
Some mathematical problems related to the 2nd order optimal shape of a crystallization interface
(2012)
We consider the problem of optimizing the stationary temperature distribution and the equilibrium shape of the solid-liquid interface in a two-phase system subject to a temperature gradient. The interface satisfies the minimization principle of the free energy, while the temperature solves the heat equation with radiation boundary conditions at the outer wall. Under the condition that the temperature gradient is uniformly negative in the direction of crystallization, the interface is expected to have a global graph representation. We reformulate this condition as a pointwise constraint on the gradient of the state, and we derive the first-order optimality system for a class of objective functionals that account for the second surface derivatives and for the surface temperature gradient.
Some aspects of reachability for parabolic boundary control problems with control constraints
(2009)
A class of one-dimensional parabolic optimal boundary control problems
is considered. The discussion includes Neumann, Robin, and Dirichlet
boundary conditions. The reachability of a given target state in final
time is discussed under box constraints on the control. As a mathematical
tool, related exponential moment problems are investigated. Moreover,
based on a detailed study of the adjoint state, a technique is presented
to find the location and the number of the switching points of optimal
bang-bang controls. Numerical examples illustrate this procedure.
Solving Time-Dependent Optimal Control Problems in COMSOL Multiphysics by Space-Time Discretizations
(2009)
We use COMSOL Multiphysics to solve time-dependent optimal control problems for partial differential equations whose optimality conditions can be formulated as a PDE. For a
class of linear-quadratic model problems we summarize known analytic results on existence
of solutions and first order optimality conditions that exhibit the typical feature of time-dependent control problems, namely the fact that a part of the optimality system has to be
integrated backward in time. We present a strategy that is based on the treatment of the
coupled optimality system in the space-time cylinder. A brief motivation of this approach is
given by showing that the optimality system is elliptic in some sense. Numerical examples
show advantages and limits of the usage of COMSOL Multiphysics and of our approach.
The stochastic dynamics of a well-stirred mixture of molecular species
interacting through different biochemical reactions can be
accurately modelled by the chemical master equation (CME). Research in
the biology and scientific computing community has
concentrated mostly on the development of numerical techniques to
approximate the solution of the CME via many realizations of the associated
Markov jump process. The domain of exact and/or efficient methods for
directly solving the CME is still widely open, which is due to its
large dimension that grows exponentially with the number of molecular
species involved. In this article, we present an exact solution
formula of the CME for arbitrary initial conditions in the case where
the underlying system is governed by monomolecular reactions. The
solution can be expressed in terms of the convolution of multinomial
and product Poisson distributions with time-dependent parameters
evolving according to the traditional reaction-rate equations. This
very structured representation allows one to deduce any property of the
solution. The model class includes many interesting examples and may
also be used as the starting point for the design of new numerical
methods for the CME of more complex reaction systems.
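A minimal illustration of this type of solution formula for the simplest monomolecular system, a conversion reaction A -> B with rate constant c and a deterministic initial count of A molecules: the reaction-rate equation supplies the time-dependent parameter, the CME marginal of A is binomial, and a crude simulation of the associated Markov jump process serves as a sanity check. The concrete reaction and all parameter values are assumptions made only for this sketch.

```python
import numpy as np
from scipy.stats import binom

# A -> B with rate constant c, starting from n0 molecules of A.
# The rate equation a'(t) = -c a(t) gives a(t) = n0 * exp(-c t),
# and the CME marginal of A at time t is Binomial(n0, exp(-c t)).
c, n0, t = 0.5, 20, 2.0
p_t = np.exp(-c * t)
analytic = binom.pmf(np.arange(n0 + 1), n0, p_t)

# Monte Carlo check via the associated Markov jump process (SSA)
rng = np.random.default_rng(2)

def ssa_final_count(n, c, t_end, rng):
    t_now = 0.0
    while n > 0:
        t_now += rng.exponential(1.0 / (c * n))   # waiting time to next conversion
        if t_now > t_end:
            break
        n -= 1
    return n

samples = np.array([ssa_final_count(n0, c, t, rng) for _ in range(20000)])
empirical = np.bincount(samples, minlength=n0 + 1) / len(samples)
print(np.max(np.abs(empirical - analytic)))   # small sampling error
```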
Pseudo-Boolean problems generalize SAT problems by allowing linear constraints and a linear objective function. Different solvers, mainly having their roots in the SAT domain, have been proposed and compared, for instance, in Pseudo-Boolean evaluations. One can also formulate Pseudo-Boolean models as integer programming models. That is, Pseudo-Boolean problems lie on the border between the SAT domain and the integer programming field.
In this paper, we approach Pseudo-Boolean problems from the integer programming side. We introduce the framework SCIP that implements constraint integer programming techniques. It integrates methods from constraint programming, integer programming, and SAT-solving: the solution of linear programming relaxations, propagation of linear as well as nonlinear constraints, and conflict analysis. We argue that this approach is suitable for Pseudo-Boolean instances containing general linear constraints, while it is less efficient for pure SAT problems. We present extensive computational experiments on the test set used for the Pseudo-Boolean evaluation 2007. We show that our approach is very efficient for optimization instances and competitive for feasibility problems. For the nonlinear parts, we also investigate the influence of linear programming relaxations and propagation methods on the performance. It turns out that both techniques are helpful for obtaining an efficient solution method.
This paper presents concepts and implementation of the finite element toolbox Kaskade 7, a flexible C++ code for solving elliptic and parabolic PDE systems. Issues such as problem formulation, assembly and adaptivity are discussed using the example of optimal control problems. Trajectory compression for parabolic optimization problems is considered as a case study.
We discuss solvers for Sylvester, Lyapunov, and Stein equations
that are available in the SLICOT Library (Subroutine
Library In COntrol Theory). These solvers offer improved
efficiency, reliability, and functionality compared to corresponding
solvers in other computer-aided control system design
packages. The performance of the SLICOT solvers is
compared with the corresponding MATLAB solvers.
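SLICOT itself is a Fortran library, typically called from MATLAB or through wrapper packages; purely to illustrate the kind of equation these solvers address, the following sketch solves a small continuous-time Lyapunov equation with SciPy's general-purpose routine and checks the residual. The matrix size and the shift used to make the example matrix stable are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)) - 4.0 * np.eye(n)   # shift to make A (generically) stable
Q = np.eye(n)

# solve A X + X A^T = -Q, the standard stability Lyapunov equation
X = solve_continuous_lyapunov(A, -Q)

residual = A @ X + X @ A.T + Q
print(np.linalg.norm(residual))   # close to machine precision
```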
In this paper we study BSDEs arising from a special class of backward stochastic partial differential equations (BSPDEs) that is intimately related to utility maximization problems with respect to arbitrary utility functions. After providing existence and uniqueness results, we discuss numerical realizability. Then we study utility maximization problems on incomplete financial markets whose dynamics are governed by continuous semimartingales. Adapting standard methods that solve the utility maximization problem using BSDEs, we give solutions for the portfolio optimization problem which involve the delivery of a liability at maturity. We illustrate our study by numerical simulations for selected examples. As a byproduct we prove existence of a solution to a very particular quadratic growth BSDE with unbounded terminal condition. This complements results on this topic obtained in [6,7,8].
In the case of the equidistant discretization of the Airy differential equation (the "discrete Airy equation") the exact solution can be found explicitly. This fact is used
to derive a discrete transparent boundary condition (TBC) for a Schroedinger
equation with linearly varying potential, which can be used in "parabolic equation"
simulations in (underwater) acoustics and for radar propagation in the troposphere.
We propose different strategies for the discrete TBC and show an efficient implementation.
Finally, a stability proof for the resulting scheme is given. A numerical
example in the application to underwater acoustics shows the superiority of the new
discrete TBC.
Propagation of short optical pulses in a nonlinear dispersive medium is considered without the use of slow envelope and
unidirectional propagation approximations. The existence of uniformly moving solitary solutions is predicted in the anomalous
dispersion domain. A four-parameter family of such solutions is found that contains the classical envelope soliton in the limit of
large pulse durations. In the opposite limit we get another family member, which, in contrast to the envelope soliton, depends strongly on the nonlinearity model and represents the shortest and most intense pulse that can propagate in a stationary manner.
Particle methods have become indispensable in conformation dynamics to compute transition rates in protein folding, binding processes and molecular design, to mention a few. Conformation dynamics requires a decomposition of a molecule's position space into metastable conformations. In this paper, we show how this decomposition can be obtained via the design of either ``soft'' or ``hard'' molecular conformations. We show that the soft approach results in a larger metastability of the decomposition and is thus more advantageous. This is illustrated by a simulation of Alanine Dipeptide.
Sobolev stability of plane wave solutions to the cubic nonlinear Schrödinger equation on a torus
(2013)
It is shown that plane wave solutions to the cubic nonlinear Schrödinger equation on a torus are orbitally stable under generic perturbations of the initial data that are small in a high-order Sobolev norm, over long times that extend to arbitrary negative powers of the smallness parameter. The perturbation stays small in the same Sobolev norm over such long times. The proof uses a Hamiltonian reduction and transformation and, alternatively, Birkhoff normal forms or modulated Fourier expansions in time.
We give an exposition of recent results on regularity and Fredholm properties for first-order one-dimensional hyperbolic PDEs. We show that large classes of boundary operators cause a smoothing effect: the smoothness of solutions increases with time. This property is the key in finding regularizers
(parametrices) for hyperbolic problems. We construct regularizers for periodic problems for dissipative first-order linear hyperbolic PDEs and show that these problems are modeled by Fredholm operators of index zero.
In this paper we introduce the notion of smoothed competitive analysis of online
algorithms. Smoothed analysis has been proposed by Spielman and Teng [22] to explain
the behaviour of algorithms that work well in practice while performing very poorly
from a worst case analysis point of view. We apply this notion to analyze the Multi-
Level Feedback (MLF) algorithm to minimize the total flow time on a sequence of
jobs released over time when the processing time of a job is only known at time of
completion.
The initial processing times are integers in the range $[1, 2^K]$. We use a partial bit
randomization model, where the initial processing times are smoothened by changing
the $k$ least significant bits under a quite general class of probability distributions. We
show that MLF admits a smoothed competitive ratio of $O(\max((2^k/\sigma)^3, (2^k/\sigma)^2\, 2^{K-k}))$,
where $\sigma$ denotes the standard deviation of the distribution. In particular, we obtain a
competitive ratio of $O(2^{K-k})$ if $\sigma = \Theta(2^k)$. We also prove an $\Omega(2^{K-k})$ lower bound for
any deterministic algorithm that is run on processing times smoothened according to
the partial bit randomization model. For various other smoothening models, including
the additive symmetric smoothening model used by Spielman and Teng [22], we give a
higher lower bound of $\Omega(2^K)$.
A direct consequence of our result is also the first average case analysis of MLF. We
show a constant expected ratio of the total flow time of MLF to the optimum under
several distributions including the uniform distribution.
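The partial bit randomization model can be made concrete as in the sketch below, which assumes one particular member of the admissible class of distributions, namely replacing the k least significant bits of each processing time by uniformly distributed random bits; the exact range of the random part varies with the chosen distribution, so this is illustrative only.

```python
import numpy as np

def smoothen(processing_times, k, rng):
    """Partial bit randomization: keep the leading bits of each processing
    time and redraw the k least significant bits uniformly at random."""
    p = np.asarray(processing_times, dtype=np.int64)
    high = (p >> k) << k                          # keep leading bits
    low = rng.integers(0, 2 ** k, size=p.shape)   # uniform replacement bits
    return high + low

rng = np.random.default_rng(4)
K, k = 10, 4
adversarial = rng.integers(1, 2 ** K + 1, size=8)   # instance with times in [1, 2^K]
print(adversarial)
print(smoothen(adversarial, k, rng))
```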
Many applications give rise to matrix polynomials whose coefficients have
a kind of reversal symmetry, a structure we call palindromic.
Several properties of scalar palindromic polynomials are derived,
and, together with properties of compound matrices, are used to
establish the Smith form of regular and singular T-palindromic matrix polynomials,
over arbitrary fields.
The invariant polynomials are shown to
inherit palindromicity,
and their structure is described in detail.
Jordan structures of palindromic matrix polynomials are characterized,
and necessary conditions for the
existence of structured linearizations are established.
In the odd degree case, a constructive procedure for building
palindromic linearizations shows that the necessary conditions are sufficient as well.
The Smith form for *-palindromic polynomials is also analyzed. Finally, results for palindromic matrix polynomials over fields of
characteristic two are presented.
Using Freidlin-Wentzell sample path large deviations theory, we characterise the small-time behaviour of probabilities of a process following an uncorrelated local-stochastic volatility model.
As a corollary, we determine the small-maturity behaviour of the implied volatility under this class of processes.
We characterize the Smith form of skew-symmetric matrix polynomials
over an arbitrary field $\F$,
showing that all elementary divisors occur with even multiplicity.
Restricting the class of equivalence transformations to unimodular congruences,
a Smith-like skew-symmetric canonical form
for skew-symmetric matrix polynomials is also obtained.
These results are used to analyze the eigenvalue and elementary divisor structure
of matrices expressible as products of two skew-symmetric matrices,
as well as the existence of structured linearizations
for skew-symmetric matrix polynomials.
By contrast with other classes of structured matrix polynomials
(e.g., alternating or palindromic polynomials),
every regular skew-symmetric matrix polynomial
is shown to have a structured strong linearization.
While there are singular skew-symmetric polynomials of even degree
for which a structured linearization is impossible,
for each odd degree we develop a skew-symmetric companion form
that uniformly provides a structured linearization
for every regular and singular skew-symmetric polynomial
of that degree.
Finally, the results are applied to the construction of minimal
symmetric factorizations of skew-symmetric rational matrices.
The classical singular value decomposition for a matrix $A\in\Cmn$ is a
canonical form for $A$ that also displays the eigenvalues
of the Hermitian matrices $AA^\ast$ and $A^\ast A$. In this paper, we develop
a corresponding decomposition for $A$ that provides the Jordan canonical forms
for the complex symmetric matrices $AA^T$ and $A^TA$. More generally, we consider
the matrix triple $(A,G_1,G_2)$, where $G_1\in\CC{m}, G_2\in\CC{n}$
are invertible and either complex symmetric or complex skew-symmetric, and we
provide a canonical form under transformations of the form
$(A,G_1,G_2)\mapsto(X^T A Y, X^T G_1X, Y^T G_2Y)$, where $X,Y$ are nonsingular.
Mehta, Roughgarden, and Sundararajan recently introduced a new class of cost sharing mechanisms called acyclic mechanisms. These mechanisms achieve a slightly weaker notion of truthfulness than the well-known Moulin mechanisms, but provide additional freedom to improve budget balance and social cost approximation guarantees. In this paper, we investigate the potential of acyclic mechanisms for combinatorial optimization problems. In particular, we study a subclass of acyclic mechanisms which we term singleton acyclic mechanisms. We show that every $\rho$-approximate algorithm that is partially increasing can be turned into a singleton acyclic mechanism that is weakly group-strategyproof and $\rho$-budget balanced. Based on this result, we develop singleton acyclic mechanisms for parallel machine scheduling problems with completion time objectives, which perform extremely well both with respect to budget balance and social cost.
We study a planning problem arising in SDH/WDM multi-layer telecommunication network design. The goal is to find a minimum cost
installation of link and node hardware of both network layers such that traffic demands can be realized via grooming and a survivable routing. We present a mixed-integer programming formulation that takes many practical side constraints into account, including node hardware, several bitrates, and survivability against single physical node or link failures. This model is solved using a branch-and-cut approach with problem-specific preprocessing and cutting planes based on either of the two layers. On several realistic two-layer planning scenarios, we show that these cutting planes are still useful in the multi-layer context,
helping to increase the dual bound and to reduce the optimality gaps.
In production and trading planning, energy supply companies make a number of decisions under uncertain conditions. An optimization model for a medium-term planning horizon must take these uncertainties into account, for example by incorporating statistical models for the random input data. This makes it possible, in principle, to integrate risk considerations directly into the optimization. In this work we demonstrate the possibility of including special dynamic risk measures, so-called polyhedral risk measures, in the objective function of the optimization. In contrast to many other approaches, this does not substantially increase the complexity of the problem. The presented model provides a decision-support tool for smaller market participants with regard to procurement planning. In particular, concrete medium-term binding supply contracts are compared with the option of planning procurement primarily on the basis of the spot and futures markets.
This paper demonstrates simulation tools for edge-emitting multi quantum well (MQW) lasers.
Properties of the strained MQW active region are simulated by eight-band kp calculations. Then, a 2D
simulation along the transverse cross section of the device is performed based on a drift-diffusion model,
which is self-consistently coupled to heat transport and equations for the optical field. Furthermore, a
method is described, which allows for an efficient quasi 3D simulation of dynamic properties of multisection
edge-emitting lasers.
The work centers on chaos and coherence control of semiconductor lasers with mutual optical coupling. This question is of considerable practical relevance, since noise and chaotic behavior are general problems in high-speed optical communication. The control of optical systems with complex self-organization is also uncharted territory from a fundamental point of view. Suitable concepts for this have so far neither been convincingly described theoretically nor implemented experimentally.
We study the exact recovery of signals from quantized frame coefficients. Here, the basis of the quantization
is hard thresholding, and we present a simple algorithm for the recovery of reconstructable signals. The set of
non-reconstructable signals is shown to be star-shaped and symmetric with respect to the origin. Moreover, we
provide a criterion on the frame for the boundedness of this set. In this case, we also give a priori bounds.
A basic task in signal analysis is to characterize data in a meaningful way for analysis and classification
purposes. Time-frequency transforms are powerful strategies
for signal decomposition, and important recent generalizations
have been achieved in the setting of frame theory. In parallel
recent developments, tools from algebraic topology, traditionally
developed in purely abstract settings, have provided new insights
in applications to data analysis. In this report, we investigate some
interactions of these tools, both theoretically and with numerical
experiments, in order to characterize signals and their frame
transforms. We explain basic concepts in persistent homology
as an important new subfield of computational topology, as well
as formulations of time-frequency analysis in frame theory. Our
objective is to use persistent homology for constructing topological signatures of signals in the context of frame theory. The
motivation is to design new classification and analysis methods by
combining the strength of frame theory as a fundamental signal
processing methodology, with persistent homology as a new tool
in data analysis.
We consider scheduling on a single machine with one non-availability period to minimize the weighted sum of completion times. We provide a preemptive algorithm with an approximation ratio arbitrarily close to the Golden Ratio,~$(1+\sqrt{5})/2+\eps$, which improves on a previously best known~$2$-approximation. The non-preemptive version of the same algorithm yields a~$(2+\eps)$-approximation.
We study the global spatial regularity of solutions of elasto-plastic models with linear hardening. In order to point out the main idea, we consider a model problem on a cube, where we prescribe Dirichlet and
Neumann boundary conditions on the top and the bottom, respectively, and periodic boundary conditions on the
remaining faces. Under natural smoothness assumptions on the data we obtain
$u\in L^\infty((0,T);H^{3/2-\delta}(\Omega))$ for the displacements and
$z\in L^\infty((0,T);H^{1/2-\delta}(\Omega))$ for the internal variables.
The proof is based on a difference quotient technique and a reflection argument.
We study a mechanical equilibrium problem for a material consisting of two components with different densities, which makes it possible to change the outer shape by changing the interface between the subdomains. We formulate the shape design problem of compensating unwanted workpiece
changes by controlling the interface, employ regularity results for transmission problems for a rigorous derivation of optimality conditions based on the speed method, and conclude with some numerical results based on a spline approximation of the interface.
The derivation of multiplier-based optimality conditions for elliptic mathematical programs with equilibrium constraints (MPEC) is essential for the characterization of solutions and development of numerical methods. Though much can be said for broad classes of elliptic MPECs in both polyhedric and non-polyhedric settings, the calculation becomes significantly more complicated when additional constraints are imposed on the control. In this paper we develop three derivation methods for constrained MPEC problems: via concepts from variational analysis, via penalization of the control constraints, and via penalization of the lower-level problem with the subsequent regularization of the resulting nonsmoothness. The developed methods and obtained results are then compared and contrasted.
A class of optimal control problems for a semilinear parabolic partial differential equation
with control and mixed control-state constraints is considered.
For this problem, a projection formula is derived
that is equivalent to the necessary optimality
conditions. As main result, the superlinear convergence of a semi-smooth Newton method is shown.
Moreover, we discuss the numerical treatment and present several numerical experiments.
Cubature methods, a powerful alternative to Monte Carlo due to Kusuoka [Adv. Math. Econ. 6, 69–83, 2004] and Lyons–Victoir [Proc. R. Soc. Lond. Ser. A 460, 169–198, 2004], involve the solution to numerous auxiliary ordinary differential equations. With focus on the Ninomiya-Victoir algorithm [Appl. Math. Fin. 15, 107–121, 2008], which corresponds to a concrete level 5 cubature method, we study some parametric diffusion models motivated from financial applications, and exhibit structural conditions under which all involved ODEs can be solved explicitly and efficiently. We then enlarge the class of models for which this technique applies, by introducing a (model-dependent) variation of the Ninomiya-Victoir method. Our method remains easy to implement; numerical examples illustrate the savings in computation time.
The LSW model with encounters has been suggested by Lifshitz and
Slyozov as a regularization of their classical mean-field model for
domain coarsening to obtain universal self-similar long-time
behavior. We rigorously establish that an exponentially decaying
self-similar solution to this model exists, and show that this
solution is isolated in a certain function space. Our proof relies
on setting up a suitable fixed-point problem in an appropriate
function space and careful asymptotic estimates of the solution to a
corresponding homogeneous problem.
We consider a control constrained optimal control problem governed by a semilinear
elliptic equation with nonlocal interface conditions. These conditions occur during the modeling of
diffuse-gray conductive-radiative heat transfer. After stating first-order necessary conditions, second-order
sufficient conditions are derived that account for strongly active sets. These conditions ensure
local optimality in an $L^s$-neighborhood, whereby the underlying analysis allows the use of norms
weaker than $L^\infty$.
We derive formulae for the second-order subdifferential of polyhedral norms. These formulae are fully explicit in terms of the initial data. In a first step we rely on the explicit formula for the coderivative of the normal cone mapping to polyhedra. Though being explicit, this formula is quite involved and difficult to apply. Therefore, we derive simple formulae for the 1-norm and, making use of a recently obtained formula for the second-order subdifferential of the maximum function, for the maximum norm.
This paper concerns second-order analysis for a remarkable class of variational systems in finite-dimensional and infinite-dimensional spaces, which is particularly important for the study of optimization and equilibrium problems with equilibrium constraints. Systems of this type are described via variational inequalities over polyhedral convex sets and allow us to provide a comprehensive local analysis by using appropriate generalized differentiation of the normal cone mappings for such sets. In this paper we efficiently compute the required coderivatives of the normal cone mappings exclusively via the initial data of polyhedral sets in reflexive Banach spaces. This provides the main tools of second-order variational analysis allowing us, in particular, to derive necessary and sufficient conditions for robust Lipschitzian stability of solution maps to parameterized variational inequalities with evaluating the exact bound of the corresponding Lipschitzian moduli. The efficient coderivative calculations and characterizations of robust stability obtained in this paper are the first results in the literature for the problems under consideration in infinite-dimensional spaces. Most of them are also new in finite dimensions.
We consider risk-averse formulations of multistage stochastic linear programs. For these formulations, based on convex combinations of spectral risk measures, risk-averse dynamic programming equations can be written. As a result, the Stochastic Dual Dynamic Programming
(SDDP) algorithm can be used to obtain approximations of
the corresponding risk-averse recourse functions. This allows us to define a risk-averse nonanticipative feasible policy for the stochastic linear program. Formulas for the cuts that approximate the recourse functions are given.
This paper introduces the software package SCIP Optimization Suite and describes its three components: the modeling language Zimpl, the linear programming (LP) solver SoPlex, and SCIP, a software framework for constraint integer programming (CIP). We explain how these three components can be used to model and solve challenging mixed integer linear optimization problems (MIP) and mixed integer nonlinear optimization problems (MINLP). SCIP is currently one of the fastest MIP and MINLP solvers available. Using several examples, we show how to use Zimpl, SCIP, and SoPlex, and give an overview of the available interfaces. Finally, we outline plans for future development.
We study two related problems in non-preemptive scheduling and packing of malleable tasks with precedence constraints to minimize the makespan. We distinguish the scheduling variant, in which we allow the free choice of processors, and the packing variant, in which a task must be assigned to a contiguous subset of processors.
For precedence constraints of bounded width, we completely resolve the complexity status for any particular problem setting concerning width bound and number of processors, and give polynomial-time algorithms with best possible performance. For both scheduling and packing malleable tasks, we present an FPTAS for the NP-hard problem variants and exact algorithms for all remaining special cases. To obtain the positive results, we do not require the common monotonous penalty assumption on processing times, whereas our hardness results hold even when assuming this restriction.
With the close relation between contiguous scheduling and strip packing, our FPTAS
is the first (and best possible) constant factor approximation for (malleable) strip packing under special precedence constraints.
A framework for the reduction of scenario trees as inputs of (linear) multistage stochastic programs is provided such that optimal values and approximate solution sets remain close to each other. The argument is based on upper bounds of the Lr-distance and the filtration distance, and on quantitative stability results for multistage stochastic programs. The important difference from scenario reduction in two-stage models consists in incorporating the filtration distance. An algorithm is presented for selecting and removing nodes of a scenario tree such that a prescribed error tolerance is met. Some numerical experience is reported.
An important issue for solving multistage stochastic programs consists in the approximate representation of the (multivariate) stochastic input process in the form of a scenario tree. In this paper, forward and backward approaches are developed for generating scenario trees out of an initial fan of individual scenarios. Both approaches are motivated by the recent stability result in [15] for optimal values of multistage stochastic programs. They are based on upper bounds for the two relevant ingredients of the stability estimate, namely, the probabilistic and the filtration distance, respectively. These bounds make it possible to control the process of recursive scenario reduction [13] and branching. Numerical experience is reported for constructing multivariate scenario trees in electricity portfolio management.
Stochastic optimization techniques are highly relevant for applications in electricity production and trading since, in particular after the deregulations of many electricity markets, there is a high number of uncertainty factors (e.g., demand, spot prices) to be considered that can be described reasonably by statistical models. Here, we want to highlight two aspects of this approach: scenario tree approximation and risk aversion. The former is a procedure to replace a general statistical model (probability distribution), which makes the optimization problem intractable, suitably by a finite discrete distribution (scenarios). This is typically an indispensable first step towards a solution of a stochastic optimization model. On the other hand, this is a highly sensitive concern, in particular if dynamic decision structures are involved (multistage stochastic programming). Then, the approximate distribution must exhibit tree structure. Moreover, it is of interest to get by with a moderate number of scenarios to keep the resulting problem tractable. In any case, one has to rely on suitable stability results to ensure that the obtained results are indeed related to the original (infinite dimensional) problem. These stability results involve probability distances and, for the multistage case, a filtration distance that evaluates the information increase over time. We present respective approximation schemes relying on Monte Carlo sampling and on scenario reduction and combination techniques. The second topic of this talk is risk aversion. Namely, we present the approach of polyhedral risk measures which are given as (the optimal values of) certain simple stochastic programs. Well-known risk measures such as CVaR and expected polyhedral utility belong to this class and, moreover, multiperiod risk measures for multistage stochastic programs are suggested. For stochastic programs incorporating polyhedral risk measures it has been shown that numerical tractability as well as stability results known for classical (non-risk-averse) stochastic programs remain valid. In particular, the same scenario approximation methods can be used. Finally, we present illustrative numerical results from an electricity portfolio optimization model for a municipal power utility.
Stochastic programming problems appear as mathematical models for optimization problems under stochastic uncertainty. Most computational approaches for solving such models are based on approximating the underlying probability distribution by a probability measure with finite support. Since the computational complexity for solving stochastic programs gets worse when increasing the number of atoms (or scenarios), it is sometimes necessary to reduce their number. Techniques for scenario reduction often require fast heuristics for solving combinatorial subproblems. Available techniques are reviewed and open problems are discussed.
Discrete approximations to chance constrained and mixed-integer two-stage stochastic programs require moderately sized scenario
sets. The relevant distances of (multivariate) probability
distributions for deriving quantitative stability results for such stochastic programs are $\mathcal{B}$-discrepancies, where the class $\mathcal{B}$ of Borel sets depends on their structural properties.
Hence, the optimal scenario reduction problem for such models is stated with respect to $\mathcal{B}$-discrepancies. In this paper,
upper and lower bounds, and some explicit solutions for optimal scenario reduction problems are derived. In addition, we develop
heuristic algorithms for determining nearly optimally reduced probability measures, discuss the case of the cell discrepancy (or
Kolmogorov metric) in some detail and provide some numerical experience.
Portfolio and risk management problems of power
utilities may be modeled by multistage stochastic programs. These
models use a set of scenarios and corresponding probabilities
to model the multivariate random data process (electrical load,
stream flows to hydro units, and fuel and electricity prices). For
most practical problems the optimization problem that contains
all possible scenarios is too large. Due to computational complexity
and to time limitations this program is often approximated by
a model involving a (much) smaller number of scenarios. The proposed
reduction algorithms determine a subset of the initial scenario
set and assign new probabilities to the preserved scenarios.
The scenario tree construction algorithms successively reduce the
number of nodes of a fan of individual scenarios by modifying the
tree structure and by bundling similar scenarios. Numerical experience
is reported for constructing scenario trees for the load
and spot market prices entering a stochastic portfolio management
model of a German utility.
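The following sketch illustrates the basic reduction idea described above, deleting scenarios one by one and reassigning the probability of each deleted scenario to its nearest preserved one. It is a simplified backward-deletion heuristic with a Euclidean scenario distance, not the algorithms of the paper.

```python
import numpy as np

def reduce_scenarios(scenarios, probs, n_keep):
    """Greedy backward reduction: repeatedly delete the scenario whose removal
    is cheapest (its probability times the distance to its nearest remaining
    scenario), then redistribute its probability to that nearest scenario."""
    scenarios = np.asarray(scenarios, float)
    probs = np.asarray(probs, float).copy()
    dist = np.linalg.norm(scenarios[:, None, :] - scenarios[None, :, :], axis=2)
    kept = set(range(len(probs)))
    while len(kept) > n_keep:
        best, best_cost = None, np.inf
        for i in kept:
            others = [j for j in kept if j != i]
            cost = probs[i] * dist[i, others].min()
            if cost < best_cost:
                best, best_cost = i, cost
        kept.remove(best)
        others = sorted(kept)
        nearest = others[int(np.argmin(dist[best, others]))]
        probs[nearest] += probs[best]   # reassign probability of deleted scenario
        probs[best] = 0.0
    return sorted(kept), probs

# usage on a small fan of equally likely scenarios
rng = np.random.default_rng(5)
scen = rng.standard_normal((10, 4)).cumsum(axis=1)
p = np.full(10, 0.1)
kept, new_p = reduce_scenarios(scen, p, n_keep=4)
print(kept, new_p[kept], new_p.sum())
```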
Stability-based methods for scenario generation in stochastic programming are reviewed. In particular, we briefly discuss Monte Carlo sampling, Quasi-Monte Carlo methods, quadrature rules based on sparse grids and optimal quantization. In addition, we provide some convergence results based on recent developments in multivariate integration. The method of optimal scenario reduction and techniques for scenario trees generation are also reviewed.
The paper is devoted to Schroedinger operators on bounded intervals of the real axis with dissipative boundary conditions. In the framework of the Lax-Phillips scattering theory the asymptotic behaviour of the phase shift is investigated in detail and its relation to the spectral shift is discussed, in particular, trace formula and Birman-Krein formula are verified directly. The results are used for dissipative Schroedinger-Poisson systems.
Scalable Frames
(2012)
Tight frames can be characterized as those frames which possess optimal numerical stability properties. In this paper, we consider the question of modifying a general frame to generate a tight frame by rescaling its frame vectors; a process which can also be regarded as perfect preconditioning of a frame by a diagonal operator. A frame is called scalable, if such a diagonal operator exists. We derive various characterizations of scalable frames, thereby including the infinite-dimensional situation. Finally, we provide a geometric interpretation of scalability in terms of conical surfaces.
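A rough numerical way to test scalability of a finite frame is to ask whether nonnegative weights c_i exist with sum_i c_i f_i f_i^T = I, which corresponds to rescaling the frame vectors by s_i = sqrt(c_i); the sketch below checks this with a nonnegative least-squares fit. The examples and the residual-based check are illustrative assumptions and do not reproduce the characterizations derived in the paper.

```python
import numpy as np
from scipy.optimize import nnls

def scalability_check(F):
    """Numerically test whether the columns of F form a scalable frame, i.e.
    whether nonnegative weights c_i exist with sum_i c_i f_i f_i^T = I.
    Returns the scalings s_i = sqrt(c_i) and the residual of the fit."""
    n, m = F.shape
    M = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(m)])
    c, residual = nnls(M, np.eye(n).ravel())
    return np.sqrt(c), residual

# an orthonormal basis repeated twice is trivially scalable ...
F1 = np.hstack([np.eye(2), np.eye(2)])
print(scalability_check(F1))   # residual ~ 0

# ... while two nearly parallel vectors in R^2 are not
F2 = np.array([[1.0, 0.99], [0.0, np.sqrt(1 - 0.99**2)]])
print(scalability_check(F2))   # residual clearly > 0
```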
We estimate potential energy savings in IP-over-WDM networks achieved by switching off router line cards in low-demand hours. We compare three approaches to react on dynamics in the IP traffic over time, FUFL,
DUFL and DUDL. They provide different levels of freedom in adjusting the routing of lightpaths in the WDM layer and the routing of demands in the IP layer. Using MILP models based on realistic network topologies and node architectures as well as realistic demands, power, and cost values, we show that already a simple monitoring of the lightpath utilization in order to deactivate empty line cards (FUFL) brings substantial
benefits. The most significant savings, however, are achieved by rerouting traffic in the IP layer (DUFL), which allows emptying and deactivating lightpaths together with the corresponding line cards. A
sophisticated reoptimization of the virtual topologies and the routing in the optical domain for every demand scenario (DUDL) yields nearly no additional profits in the considered networks.
We define a risk averse nonanticipative feasible policy for multistage stochastic programs and propose a methodology to implement it. The approach is based on dynamic programming equations written for a risk averse formulation of the problem.
This formulation relies on a new class of multiperiod risk functionals called extended polyhedral risk measures. Dual representations of such risk functionals are given and used to derive conditions of coherence. In the one-period case, conditions for convexity and consistency with second order stochastic dominance are also provided. The risk averse dynamic programming equations are specialized considering convex combinations of one-period extended polyhedral risk measures such as spectral risk measures.
To implement the proposed policy, the approximation of the risk averse recourse functions for stochastic linear programs is discussed. In this context, we detail a stochastic dual dynamic programming algorithm which converges to the optimal value of the risk averse problem.
The line planning problem is one of the fundamental problems in strategic planning of public and rail transport. It consists in finding lines and corresponding frequencies in a network such that a given demand can be satisfied. There are two objectives: passengers want to minimize travel times, while the transport company wishes to minimize operating costs. We investigate three variants of a multi-commodity flow model for line planning that differ with respect to passenger routings. The first model allows arbitrary routings, the second only unsplittable routings, and the third only shortest path routings with respect to the network. We compare these models theoretically and computationally on data for the city of Potsdam.
We study the dynamics of a ring of unidirectionally coupled autonomous
Duffing oscillators. Starting from a situation where the individual
oscillator without coupling has only trivial equilibrium dynamics,
the coupling induces complicated transitions to periodic, quasiperiodic,
chaotic, and hyperchaotic behavior. We study these transitions in
detail for small and large numbers of oscillators. Particular attention
is paid to the role of unstable periodic solutions for the appearance
of chaotic rotating waves, spatiotemporal structures and the Eckhaus
effect for a large number of oscillators. Our analytical and numerical
results are confirmed by a simple experiment based on the electronic
implementation of coupled Duffing oscillators.
Primal heuristics are an important component of state-of-the-art codes for mixed integer programming. In this paper, we focus on primal heuristics that only employ computationally inexpensive procedures such as rounding and logical deductions (propagation). We give an overview of eight different approaches. To assess the impact of these primal heuristics on the ability to find feasible solutions, in particular early during search, we introduce a new performance measure, the primal integral. Computational experiments evaluate this and other measures on MIPLIB~2010 benchmark instances.
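One common way to evaluate such a measure for a single minimization run is sketched below: the normalized primal gap of the current incumbent is integrated over time, with gap 1 before the first feasible solution is found. The normalization conventions used here are assumptions and may differ from the exact definition in the paper.

```python
def primal_gap(obj, opt):
    """Normalized primal gap in [0, 1] between an incumbent objective value
    and the optimal (or best known) value."""
    if obj is None:
        return 1.0
    if opt == 0.0 and obj == 0.0:
        return 0.0
    if opt * obj < 0.0:
        return 1.0
    return abs(opt - obj) / max(abs(opt), abs(obj))

def primal_integral(history, opt, t_end):
    """history: list of (time, incumbent objective) pairs, sorted by time, for a
    minimization run; returns the integral of the primal gap over [0, t_end]."""
    integral, t_prev, current = 0.0, 0.0, None
    for t, obj in history:
        if t > t_end:
            break
        integral += (t - t_prev) * primal_gap(current, opt)
        t_prev, current = t, obj
    integral += (t_end - t_prev) * primal_gap(current, opt)
    return integral

# example: first feasible solution after 2s, improvement at 5s, optimum is 10
print(primal_integral([(2.0, 14.0), (5.0, 11.0)], opt=10.0, t_end=10.0))
```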
This paper presents some weighted $H^2$-regularity estimates for a model Poisson problem with discontinuous coefficient at high contrast. The coefficient represents a random particle reinforced composite material, i.e., highly conducting circular particles are randomly distributed in some background material with low conductivity. Based on these regularity results we study the percolation of thermal conductivity of the material as the volume fraction of the particles is close to the jammed state. We prove that the characteristic percolation behavior of the material is well captured by standard conforming finite element models.
We examine robustness of exponential dichotomies of boundary value problems for general linear first-order one-dimensional hyperbolic systems. The boundary conditions are supposed to be of types ensuring smoothing solutions in finite time, which includes reflection boundary conditions. We show that the dichotomy survives in the space of continuous functions under small perturbations of all coefficients in the differential equations.
We investigate the problem of maximizing the robust utility functional $\inf_{Q \in \mathcal{Q}} E_Q[u(X)]$.
We give the dual characterization for its solution for both a complete and an incomplete
market model. To this end, we introduce the new notion of reverse f-projections and
use techniques developed for f-divergences. This is a suitable tool to reduce the robust
problem to the classical problem of utility maximization under a certain measure: the
reverse f-projection. Furthermore, we give the dual characterization for a closely related
problem, the minimization of expenditures given a minimum level of expected utility in
a robust setting and for an incomplete market.
Resolving thin conducting sheets for shielding or even skin layers inside by the mesh of numerical methods like the finite element method (FEM) can be avoided by using impedance transmission conditions (ITCs). Those ITCs shall provide an accurate approximation for small sheet thicknesses $d$, where the accuracy is best possible independent of the conductivity or the frequency being small or large -- this we will call robustness. We investigate the accuracy and robustness of popular and recently developed ITCs, and propose robust ITCs which are accurate up to $O(d^2)$.
We consider scheduling to minimize the weighted sum of completion
times on a single machine that may experience unexpected changes in
processing speed or even full breakdowns. We design a polynomial
time deterministic algorithm that finds a robust prefixed scheduling
sequence with a solution value within~$4$ times the value
an optimal clairvoyant algorithm can achieve, knowing the
disruptions in advance and even being allowed to interrupt jobs at
any moment. A randomized version of this algorithm attains in
expectation a ratio of~$e$ w.r.t. a clairvoyant optimum.
We show that such a ratio can never be achieved by any deterministic
algorithm by proving that the price of robustness of any such
algorithm is at least~$1+\sqrt{3} \approx 2.73205>e$.
As a direct consequence of our results, the question whether a
constant approximation algorithm exists for the problem with given
machine unavailability periods is answered affirmatively. We
complement this result by an FPTAS for the preemptive and non-preemptive special case with a single
non-available period.
The key to molecular conformation dynamics is the direct identification of metastable conformations, which are almost invariant sets of molecular dynamical systems. Once some reversible Markov operator has been discretized, a generalized symmetric stochastic matrix arises. This matrix can be treated by Perron cluster analysis, a rather recent method involving a Perron cluster eigenproblem. The paper presents an improved Perron cluster analysis algorithm, which is more robust than earlier suggestions. Numerical examples are included.
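A schematic version of the underlying idea, not the improved algorithm of the paper: for a small row-stochastic matrix, eigenvalues close to 1 (the Perron cluster) indicate the number of metastable sets, and the sign structure of the corresponding eigenvectors groups the states. The tolerance and the example matrix are arbitrary choices for illustration.

```python
import numpy as np

def perron_cluster(P, gap_tol=0.1):
    """Schematic Perron cluster analysis of a row-stochastic matrix P: count
    eigenvalues in the Perron cluster (close to 1) and group states by the
    sign structure of the corresponding eigenvectors."""
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    n_clusters = int(np.sum(vals > 1.0 - gap_tol))           # size of Perron cluster
    signature = np.sign(np.round(vecs[:, 1:n_clusters], 8))  # sign structure
    _, labels = np.unique(signature, axis=0, return_inverse=True)
    return n_clusters, labels

# two weakly coupled metastable blocks: states {0, 1} and {2, 3}
P = np.array([[0.89, 0.10, 0.01, 0.00],
              [0.10, 0.89, 0.00, 0.01],
              [0.01, 0.00, 0.89, 0.10],
              [0.00, 0.01, 0.10, 0.89]])
print(perron_cluster(P))   # expect 2 clusters separating {0, 1} from {2, 3}
```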
Traffic in communication networks fluctuates heavily over time. Thus, to avoid capacity bottlenecks,
operators highly overestimate the traffic volume during network planning. In this paper we consider
telecommunication network design under traffic uncertainty, adapting the robust optimization approach
of Bertsimas and Sim [21]. We present three different mathematical formulations for this problem, provide valid inequalities,
study the computational implications, and evaluate the realized robustness.
To enhance the performance of the mixed-integer programming solver we derive robust cutset inequalities generalizing their deterministic counterparts. Instead of a single cutset inequality for every
network cut, we derive multiple valid inequalities by exploiting the extra variables available in the robust formulations. We show that these inequalities define facets under certain conditions and that they
completely describe a projection of the robust cutset polyhedron if the cutset consists of a single edge.
For realistic networks and live traffic measurements we compare the formulations and report on the
speed up by the valid inequalities. We study the “price of robustness” and evaluate the approach by
analyzing the real network load. The results show that the robust optimization approach has the potential
to support network planners better than present methods.
Three families of transmission conditions of different order are proposed for thin conducting sheets in the eddy current model. Resolving the thin sheet by a finite element mesh is often not possible. With these transmission conditions only the middle curve, but not the thin sheet itself, has to be resolved by a finite element mesh. The families of transmission conditions are derived by an asymptotic expansion for small sheet thicknesses $\eps$, where each family results from a different asymptotic framework. In the first asymptotic framework the conductivity remains constant, while it scales with $1/\eps$ in the second and with $1/\eps^2$ in the third. The different asymptotics lead to different limit conditions, namely the vanishing sheet, a non-trivial borderline case, and the impermeable sheet, as well as different transmission conditions of higher orders. We investigate the stability and the convergence of the transmission conditions as well as their robustness. We call transmission conditions robust if they provide an accurate approximation for a wide range of sheet thicknesses and conductivities. We introduce an ordering of transmission conditions for the same sheet with respect to the robustness, and observe that the condition derived for the $1/\eps$ asymptotics is the most robust limit condition, contrary to order 1 and higher, where the transmission conditions derived for the $1/\eps^2$ asymptotics turn out to be most robust.
We present new residual estimates based on Kato's square root theorem for spectral approximations of diagonalizable non-self-adjoint differential operators of convection-diffusion-reaction type. These estimates are incorporated as part of an hp-adaptive finite element algorithm for practical spectral computations, where it is shown that the
resulting a posteriori error estimates are reliable. The experiments provided demonstrate the efficiency and reliability of our approach.
We consider a sorting problem from railway optimization
called train classification: incoming trains are split up into their single
cars and reassembled to form new outgoing trains. Trains are subject
to delay, which may turn a prepared sorting schedule infeasible for the
disturbed situation. The classification methods applied today deal with
this issue by completely disregarding the input order of cars, which provides
robustness against any amount of disturbance but also wastes the
potential contained in the a priori knowledge about the input.
We introduce a new method that provides a feasible sorting schedule for the expected input and allows additional sorting steps to be inserted flexibly if the schedule has become infeasible after revealing the disturbed input.
By excluding disruptions that almost never occur from our consideration,
we obtain a classification process that is quicker than the current railway
practice but still provides robustness against realistic delays. In fact, our algorithm allows fast classification to be traded off flexibly against high degrees of robustness depending on the respective need. We further explore this flexibility in experiments on real-world traffic data, underlining that our algorithm improves on the methods currently applied in practice.
The efficient and reliable computation of guided modes in photonic crystal wave-guides is of great importance for designing optical devices. Transparent boundary conditions based on Dirichlet-to-Neumann operators allow for an exact computation of well-confined modes and modes close to the band edge in the sense that no modelling error is introduced. The well-known super-cell method, on the other hand, introduces a modelling error which may become prohibitively large for guided modes that are not well-confined. The Dirichlet-to-Neumann transparent boundary conditions are, however, not applicable for all frequencies as they are not uniquely defined and their computation is unstable for a countable set of frequencies that correspond to so-called Dirichlet eigenvalues. In this work we describe how to overcome this theoretical difficulty by introducing Robin-to-Robin transparent boundary conditions whose construction does not exhibit those forbidden frequencies. They seem, hence, well suited for an exact and reliable computation of guided modes in photonic crystal wave-guides.
Uncertainty is inevitable when solving science and engineering application problems. In the face of
uncertainty, it is essential to determine robust and risk-averse solutions. In this work,
we consider a class of PDE-constrained optimization problems in which the PDE coefficients
and inputs may be uncertain. We introduce two approximations for minimizing the
conditional value-at-risk for such PDE-constrained optimization problems. These approximations are based
on the primal and dual formulations of the conditional value-at-risk. For the primal problem,
we introduce a smooth approximation of the conditional value-at-risk in order to utilize
derivative-based optimization algorithms and to take advantage of the convergence properties
of quadrature-based discretizations. For this smoothed conditional value-at-risk, we prove
differentiability as well as consistency of our approximation. For the dual problem, we
regularize the inner maximization problem, rigorously derive optimality conditions, and demonstrate
the consistency of our approximation. Furthermore, we propose a fixed-point iteration that takes
advantage of the structure of the regularized optimality conditions and provides a means of calculating
worst-case probability distributions based on the given probability level. We conclude with numerical
results.
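For orientation, the primal formulation referred to above is typically based on the Rockafellar-Uryasev representation of the conditional value-at-risk; the smoothed plus-function shown below is only one common choice and is meant as an illustrative sketch rather than the specific approximation analyzed in the paper:
\[
\mathrm{CVaR}_{\beta}[X] \;=\; \inf_{t\in\mathbb{R}}\Big\{\, t + \tfrac{1}{1-\beta}\,\mathbb{E}\big[(X-t)^{+}\big]\Big\},
\qquad (x)^{+} := \max\{x,0\},
\]
where a smooth surrogate is obtained by replacing $(x)^{+}$ with, e.g.,
\[
p_{\varepsilon}(x) \;=\; \varepsilon\,\log\big(1+e^{x/\varepsilon}\big),
\qquad 0 \le p_{\varepsilon}(x) - (x)^{+} \le \varepsilon\log 2,
\]
so that the smoothed functional is differentiable and converges to $\mathrm{CVaR}_{\beta}$ as $\varepsilon \to 0$.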
To address the plurality of interpretations of the subjective notion of risk, we describe it by means of a risk order and concentrate on the context-invariant features of diversification and monotonicity. Our main results are uniquely characterized robust representations of lower semicontinuous risk orders on vector spaces and convex sets. This representation covers most instruments related to risk and allows for a differentiated interpretation depending on the underlying context, which is illustrated in different settings: For random variables, risk perception can be interpreted as model risk, and we compute, among others, the robust representation of the economic index of riskiness. For lotteries, risk perception can be viewed as distributional risk, and we study the "Value at Risk". For consumption patterns, which exhibit an intertemporal dimension of risk perception, we provide an interpretation in terms of discounting risk and discuss some examples.
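For concreteness, one common convention for the Value at Risk mentioned above reads (conventions differ across the literature; this is only one of them)
\[
\mathrm{V@R}_{\lambda}(X) \;=\; \inf\{\, m \in \mathbb{R} : \mathbb{P}[X + m < 0] \le \lambda \,\}, \qquad \lambda \in (0,1),
\]
that is, the smallest capital amount which, added to the position $X$, keeps the probability of a loss at or below the level $\lambda$.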
We study the risk assessment of uncertain cash flows in terms of dynamic convex risk measures for processes as introduced in Cheridito, Delbaen, and Kupper (2006). These risk measures take into account not only the amounts but also the timing of a cash flow. We discuss their robust representation in terms of suitably penalized probability measures on the optional $\sigma$-field. This yields an explicit analysis both of model and discounting ambiguity. We focus on supermartingale criteria for time consistency. In particular we show how ``bubbles'' may appear in the dynamic penalization, and how they cause a breakdown of asymptotic safety of the risk assessment procedure.
Revlex-Initial 0/1-Polytopes
(2005)
Energetic solutions to rate-independent processes are usually constructed via time-incremental minimization problems. In this work we show that all energetic solutions can be approximated by incremental problems if we allow approximate minimizers, where the error in minimization has to be of the order of the time step. Moreover, we study sequences of problems where the energy functionals have a $\Gamma$-limit.
We consider the problem of numerical approximation for forward-backward stochastic
differential equations with drivers of quadratic growth (qgFBSDE). To illustrate the significance
of qgFBSDE, we discuss a problem of cross hedging of an insurance related financial
derivative using correlated assets. For the convergence of numerical approximation schemes for
such systems of stochastic equations, path regularity of the solution processes is instrumental.
We present a method based on the truncation of the driver, and explicitly exhibit error estimates
as functions of the truncation height. We discuss a reduction method to FBSDE with globally
Lipschitz continuous drivers, by using the Cole-Hopf exponential transformation. We finally
illustrate our numerical approximation methods by giving simulations for prices and optimal
hedges of simple insurance derivatives.
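To indicate what the exponential transformation does, consider the simplified scalar situation in which the driver contains the quadratic term explicitly; this structural assumption is made only for the sketch, and the paper treats more general drivers. For a BSDE
\[
-\,dY_t \;=\; \Big(g(t,Y_t) + \tfrac{\gamma}{2}\,|Z_t|^2\Big)\,dt \;-\; Z_t\,dW_t,
\]
setting $P_t = e^{\gamma Y_t}$ and $Q_t = \gamma P_t Z_t$, Itô's formula gives
\[
-\,dP_t \;=\; \gamma\, P_t\, g\big(t, \tfrac{1}{\gamma}\log P_t\big)\,dt \;-\; Q_t\,dW_t,
\]
so the quadratic term in $Z$ cancels; this cancellation is the mechanism behind the reduction to a globally Lipschitz setting under suitable boundedness assumptions.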
Resolving the apparent gap in complexity between
simulated and measured kinetics of biomolecules
(2012)
Molecular simulations of biomolecules often reveal a complex picture of their kinetics,
whereas kinetic experiments typically seem to indicate considerably simpler two- or three-state
kinetics. Markov state models (MSM) provide a tool to link simulation and experiment,
and to resolve this apparent contradiction.
In this paper we consider the optimal stopping problem for general dynamic monetary utility functionals. Sufficient conditions for the Bellman principle and the existence of optimal stopping times are provided. Particular attention is paid to representations which allow for a numerical treatment in realistic situations. To this end, generalizations of standard evaluation methods such as policy iteration, dual, and consumption-based approaches are developed in the context of general dynamic monetary utility functionals. As a result, it turns out that the possibility of a particular generalization depends on specific properties of the utility functional under consideration.
Under high load, the automated dispatching of service vehicles for
the German Automobile Association (ADAC) must reoptimize a dispatch for
100-150 vehicles and 400 requests in about ten seconds to near optimality. In
the presence of service contractors, this can be achieved by the column generation
algorithm ZIBDIP. In metropolitan areas, however, service contractors
cannot be dispatched automatically because they may decline. The problem:
a model without contractors yields larger optimality gaps within ten seconds.
One way out is offered by simplified reoptimization models. These compute a short-term
dispatch containing only some of the requests: unknown future requests
will influence future service anyway. The simpler the models, the better the
gaps, but also the larger the model error. What is more significant: reoptimization
gap or reoptimization model error? We answer this question in
simulations on real-world ADAC data: only the new models ShadowPrice and
ZIBDIPdummy can keep up with ZIBDIP.
RENS – the optimal rounding
(2012)
This article introduces RENS, the relaxation enforced neighborhood search, a large neighborhood search algorithm for mixed integer nonlinear programming (MINLP) that uses a sub-MINLP to explore the set of feasible roundings of an optimal solution $x'$ of a linear or nonlinear relaxation. The sub-MINLP is constructed by fixing the integer variables $x_j$ with $x'_j \in \mathbb{Z}$ and bounding the remaining integer variables to $x_j \in \{\lfloor x'_j \rfloor, \lceil x'_j \rceil\}$. We describe two different applications of RENS: as a standalone algorithm to compute an optimal rounding of the given starting solution, and as a primal heuristic inside a complete MINLP solver.
We use the former to compare different kinds of relaxations and the impact of cutting planes on the roundability of the corresponding optimal solutions. We further utilize RENS to analyze the performance of three rounding heuristics implemented in the branch-cut-and-price framework SCIP. Finally, we study the impact of RENS when it is applied as a primal heuristic inside SCIP.
All experiments were performed on three publicly available test sets of mixed integer linear programs (MIPs), mixed integer quadratically constrained programs (MIQCPs), and MINLPs, using solely software that is available in source code.
It turns out that for these problem classes 60% to 70% of the instances have roundable relaxation optima and that the success rate of RENS does not depend on the percentage of fractional variables. Last but not least, RENS applied as a primal heuristic complements the existing root-node heuristics in SCIP nicely and improves the overall performance.
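As a concrete illustration of how the rounding neighborhood is set up, the following solver-agnostic Python sketch derives the variable fixings and bound changes from a fractional relaxation solution; the function name and data layout are hypothetical and do not reflect the SCIP API.

import math

def rens_subproblem_bounds(relax_values, integer_vars, eps=1e-6):
    # relax_values: dict mapping variable names to their relaxation values;
    # integer_vars: the set of integer variables of the original problem.
    fixed, bounded = {}, {}
    for var in integer_vars:
        val = relax_values[var]
        if abs(val - round(val)) <= eps:
            fixed[var] = int(round(val))                       # integral: fix the variable
        else:
            bounded[var] = (math.floor(val), math.ceil(val))   # fractional: allow floor or ceil
    return fixed, bounded

# Example: x1 = 2.0 is fixed to 2, x2 = 0.4 is restricted to {0, 1}.
print(rens_subproblem_bounds({"x1": 2.0, "x2": 0.4}, {"x1", "x2"}))

The sub-MINLP induced by these fixings is then handed to the solver; if it is feasible, its optimum is the best possible rounding of the relaxation point.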
In recent years, several quite successful large neighborhood search heuristics for mixed integer programs have been published.
To the best of our knowledge, all of them are improvement heuristics.
We present a new start heuristic for general MIPs working in the spirit of large neighborhood search.
It constructs a sub-MIP which represents the space of all feasible roundings of some fractional point, normally the optimum of the LP relaxation of the original MIP.
Thereby, one is able to determine whether a point can be rounded to a feasible solution and, if so, which rounding is best.
Furthermore, a slightly modified version of RENS proves to be a well-performing heuristic inside the branch-cut-and-price framework SCIP.
Relating Attractors and Singular Steady States in the Logical Analysis of Bioregulatory Networks
(2007)
In 1973 R. Thomas introduced a logical approach to modeling and analysis of
bioregulatory networks. Given a set of Boolean functions describing the
regulatory interactions, a state transition graph is constructed that captures
the dynamics of the system. In the late eighties, Snoussi and Thomas extended
the original framework by including singular values corresponding to
interaction thresholds. They showed that these are needed for a refined
understanding of the network dynamics.
In this paper, we systematically study singular steady states, which are
characteristic of feedback circuits in the interaction graph, and relate them
to the type, number and cardinality of attractors in the state transition
graph. In particular, we derive sufficient conditions for regulatory networks
to exhibit multistationarity or oscillatory behavior, thus giving a partial
converse to the well-known Thomas conjectures.
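To illustrate the objects involved, the following toy sketch builds the asynchronous state transition graph of a hypothetical two-gene network with mutual activation and reads off its steady states; it is only meant to make the notions concrete and does not implement the singular values of Snoussi and Thomas.

from itertools import product

# Hypothetical Boolean network: two mutually activating genes.
update = {
    0: lambda s: s[1],   # gene 0 follows gene 1
    1: lambda s: s[0],   # gene 1 follows gene 0
}

def async_successors(state):
    # Asynchronous semantics: change one component at a time
    # whenever its update function disagrees with its current value.
    succ = []
    for i, f in update.items():
        if f(state) != state[i]:
            nxt = list(state)
            nxt[i] = f(state)
            succ.append(tuple(nxt))
    return succ or [state]   # steady states loop onto themselves

graph = {s: async_successors(s) for s in product((0, 1), repeat=2)}
steady = [s for s, succ in graph.items() if succ == [s]]
print(steady)   # [(0, 0), (1, 1)]: the two attractors of this toy circuit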
The numerical solution of the Dirichlet boundary optimal control problem of the Navier-Stokes equations in the presence of
pointwise state constraints is investigated. Two different regularization techniques are considered. First, a Moreau-Yosida
regularization of the problem is studied. Optimality conditions are derived and the convergence of the regularized solutions
towards the original one is proved. A source representation of the control combined with a Lavrentiev type regularization
strategy is also presented. The analysis concerning optimality conditions and convergence of the regularized solutions is
carried out. In the last part of the paper numerical experiments are presented. For the numerical solution of each
regularized problem a semi-smooth Newton method is applied.
A state-constrained optimal control problem with nonlocal radiation interface conditions arising from the modeling of crystal growth processes is considered. The problem is approximated by a Moreau-Yosida type regularization. Optimality conditions for the regularized problem are derived and the convergence of the regularized problems is shown. In the last part of the paper, some numerical results are presented.
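For readers unfamiliar with the technique, the Moreau-Yosida regularization used in both of the preceding problems replaces a pointwise state constraint, say $y \le \psi$, by a quadratic penalty; schematically,
\[
\min_{(y,u)}\; J(y,u) \;+\; \frac{\gamma}{2}\,\big\|\max\{0,\,y-\psi\}\big\|_{L^2}^2
\quad\text{subject to the state equation},
\]
where letting the penalty parameter $\gamma \to \infty$ drives the regularized solutions towards the solution of the constrained problem. This is only a generic sketch; the precise penalty terms in the two papers above differ in detail.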
Regularization and Numerical Solution of the Inverse Scattering Problem using Shearlet Frames
(2014)
Regularization techniques for the numerical solution of nonlinear inverse scattering
problems in two space dimensions are discussed. Assuming that the boundary of a scatterer is its most prominent feature, we adopt the class of cartoon-like functions as a model.
Since functions in this class are asymptotically optimally sparsely approximated by shearlet frames, we consider shearlets as a means of regularization in a Tikhonov method.
We examine both directly the nonlinear problem and a linearized problem obtained by
the Born approximation technique. As problem classes we study the acoustic inverse
scattering problem and the electromagnetic inverse scattering problem. We show that
this approach introduces a sparse regularization for the nonlinear setting and we present
a result describing the behavior of the local regularity of a scatterer under linearization,
which shows that the linearization does not affect the sparsity of the problem. The analytical results are illustrated by numerical examples for the acoustic inverse scattering problem that highlight the effectiveness of this approach.
The chemical master equation is a fundamental equation in chemical kinetics. It is an adequate substitute for the classical reaction-rate equations whenever stochastic effects become relevant. In the present paper we give a simple argument showing that the solutions of a large class of chemical master equations, including all those in which elementary reactions between two or more molecules do not generate a larger number of molecules than existed before,
are bounded in weighted $\ell_1$-spaces. As an illustration for the implications of this kind of regularity we analyze the effect of truncating the state space. This leads to an error analysis of the finite state projection of the chemical master equation, an approximation that underlies many numerical methods.
The chemical master equation is a fundamental equation in chemical kinetics. It underlies the classical reaction-rate equations and takes stochastic effects into account. In this paper we give a simple argument showing that the solutions of a large class of chemical master equations are bounded in weighted $\ell_1$-spaces and possess high-order moments. This class includes all equations in which no reactions between two or more already present molecules and further external reactants occur that add mass to the system. As an illustration for the implications of this kind of regularity, we analyze the effect of truncating the state space. This leads to an error analysis for the finite state projections of the chemical master equation, an approximation that forms the basis of many numerical methods.
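The effect of truncating the state space can be made concrete on a toy birth-death process; the following generic finite state projection sketch (rates, time horizon and truncation size are invented for illustration) is not code from the papers above.

import numpy as np
from scipy.linalg import expm

b, d, N = 2.0, 1.0, 40             # birth rate, death rate, truncation size

# Truncated CME generator: column n holds the rates out of state n.
A = np.zeros((N, N))
for n in range(N):
    A[n, n] -= b                   # birth n -> n+1 (leaves the truncated set if n = N-1)
    if n + 1 < N:
        A[n + 1, n] += b
    if n > 0:
        A[n - 1, n] += d * n       # death n -> n-1
        A[n, n] -= d * n

p0 = np.zeros(N); p0[0] = 1.0      # start with zero molecules
pT = expm(5.0 * A) @ p0            # truncated distribution at time T = 5

# The probability mass missing from pT gives the finite state projection error bound.
print(pT.sum(), 1.0 - pT.sum())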
We consider regular polynomial interpolation algorithms on recursively defined sets of interpolation points which approximate global solutions of arbitrary well-posed systems of linear partial differential equations. Convergence of the "limit" of the recursively constructed family of polynomials to the solution and error estimates are obtained from a priori estimates for some standard classes of linear partial differential equations, i.e. elliptic and hyperbolic equations. Another variation of the algorithm allows the construction of polynomial interpolations which preserve systems of linear partial differential equations at the interpolation points. We show how this can be applied in order to compute higher-order terms of WKB approximations of fundamental solutions of a large class of linear parabolic equations. The error estimates are sensitive to the regularity of the solution. Our method is compatible with recent developments for the solution of higher-dimensional partial differential equations, i.e. (adaptive) sparse grids and weighted Monte Carlo, and has obvious applications to mathematical finance and physics.
Regular Lagrange multipliers for control problems with mixed pointwise control-state constraints
(2004)
A class of quadratic optimization problems in Hilbert spaces is considered, where
pointwise box constraints and constraints of bottleneck type are given. The main focus is to prove the
existence of regular Lagrange multipliers in $L^2$-spaces. This question is solved by investigating the
solvability of a Lagrange dual quadratic problem. The theory is applied to different optimal control
problems for elliptic and parabolic partial differential equations with mixed pointwise control-state
constraints.
In this paper we develop several regression algorithms for solving general stochastic optimal control problems via Monte Carlo. This type of algorithm is particularly useful for problems with a high-dimensional state space and a complex dependence structure of the underlying Markov process with respect to some control. The main idea behind the algorithms is to simulate a set of trajectories under some reference measure and to use the Bellman principle combined with fast methods for approximating conditional expectations and functional optimization. Theoretical properties of the presented algorithms are investigated and convergence to the optimal solution is proved under mild assumptions. Finally, we present numerical results for the problem of pricing a high-dimensional Bermudan basket option under transaction costs in a financial market with a large investor.
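A minimal sketch of the regression-based backward induction idea, specialized to the simplest case of optimal stopping of a one-dimensional diffusion with a polynomial regression basis; dynamics, reward and basis are invented for illustration, and the algorithms in the paper cover far more general controls than stopping.

import numpy as np

rng = np.random.default_rng(0)
M, N, dt = 20_000, 50, 1.0 / 50                # paths, time steps, step size

# Simulate trajectories of a toy diffusion under the reference measure.
X = np.empty((M, N + 1)); X[:, 0] = 1.0
for k in range(N):
    X[:, k + 1] = X[:, k] * np.exp(-0.005 * dt + 0.2 * np.sqrt(dt) * rng.standard_normal(M))

payoff = lambda x: np.maximum(1.0 - x, 0.0)    # reward obtained upon stopping

# Backward induction: regress the continuation value on basis functions of X
# (discounting is omitted to keep the sketch short).
value = payoff(X[:, N])
for k in range(N - 1, 0, -1):
    basis = np.vander(X[:, k], 4)              # polynomial basis 1, x, x^2, x^3
    coef, *_ = np.linalg.lstsq(basis, value, rcond=None)
    exercise = payoff(X[:, k])
    value = np.where(exercise > basis @ coef, exercise, value)   # Bellman comparison

print(value.mean())                            # Monte Carlo estimate of the value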
We consider a time-dependent optimal control problem, where the state
evolution is described by an ODE. There is a variety of methods for the treatment
of such problems. We prefer to view them as boundary value problems and apply to
them the Riccati approach for non-linear BVPs with separated boundary conditions.
There are many relationships between multiple shooting techniques, the Riccati
approach and the Pantoja method, which describes a computationally efficient
stage-wise construction of the Newton direction for the discrete-time optimal control
problem.
We present an efficient implementation of this approach. Furthermore, the well-known
checkpointing approach is extended to a ``nested checkpointing'' for multiple
traversals. Some heuristics are introduced for an efficient construction of nested
reversal schedules. We discuss their benefits and compare their results to the optimal
schedules computed by exhaustive search techniques.
In high accuracy numerical simulations and optimal control of time-dependent processes, often both many time steps and fine spatial discretizations are needed. Adjoint gradient computation, or post-processing of simulation results, requires the storage of the solution trajectories over the whole time, if necessary together with the adaptively refined spatial grids. In this paper we discuss various techniques to reduce the memory requirements, focusing first on the storage of the solution data, which typically are double precision floating point values. We highlight advantages and disadvantages of the different approaches. Moreover, we present an algorithm for the efficient storage of adaptively refined, hierarchic grids, and the integration with the compressed storage of solution data.
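One elementary building block for such trajectory compression is lossy quantization with respect to a prescribed tolerance followed by standard entropy coding; the sketch below is generic (tolerance, data and the use of zlib are choices made for illustration) and is not the specific scheme developed in the paper.

import numpy as np, zlib

def compress(values, tol):
    # Quantize to a uniform grid of width 2*tol, then deflate the integer indices;
    # the pointwise reconstruction error is bounded by tol.
    q = np.round(values / (2.0 * tol)).astype(np.int32)
    return zlib.compress(q.tobytes()), values.shape

def decompress(blob, shape, tol):
    q = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    return q * (2.0 * tol)

u = np.sin(np.linspace(0.0, 10.0, 100_000))    # stand-in for a stored trajectory
blob, shape = compress(u, tol=1e-4)
print(len(blob) / u.nbytes)                                    # compression ratio
print(np.max(np.abs(u - decompress(blob, shape, tol=1e-4))))   # max error <= tol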
In this paper we investigate two different recoverable robust models to deal with cost uncertainties in a shortest path problem. Recoverable robustness extends the classical concept of robustness to deal with uncertainties by incorporating limited recovery actions after the
full data are revealed. Our first model focuses on the case where the recovery actions are quite restricted: after a simple path is fixed in the first stage, any path containing at most k new arcs may be chosen in the second stage, once all data are revealed.
Thus, the parameter k can be interpreted as a mediator between
robust optimization (no changes allowed) and optimization
on the fly (an arbitrary solution can be chosen). Considering three
classical scenario sets, which model uncertainties in the cost function,
we show that this new problem is strongly NP-hard in all
these cases and is not approximable, unless P=NP.
This is in contrast to the robust shortest path problem, where, for
example, an optimal solution can be computed efficiently for interval
and $\Gamma$-scenarios. For series-parallel graphs and interval scenarios,
we present a polynomial time algorithm for this recoverable robust
setting.
In our second model the recovery set, i.e., the set of paths selectable
in the second stage, is not limited, but deviating from the previous
choice comes at extra cost. Thus, a path chosen in the first stage
produces renting costs modeled as an $\alpha$-fraction of the scenario
cost. For an arc taken in the second stage the remaining cost needs
to be paid in addition to some extra inflation cost, modeled by a $\beta$-fraction
of the scenario cost, if the arc was not reserved beforehand. The
complexity status of this problem is similar to the robust case. Yet,
for $\Gamma$-scenarios the problem is again strongly NP-hard,
but can be approximated.
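One way to formalize the cost structure of this second model as described verbally above (this is only our reading; $P_1$ and $P_2$ denote the first- and second-stage paths and $c_S$ the arc costs in scenario $S$) is
\[
\mathrm{cost}(P_1,P_2,S) \;=\; \alpha \sum_{a\in P_1} c_S(a)
\;+\; (1-\alpha)\sum_{a\in P_2} c_S(a)
\;+\; \beta \sum_{a\in P_2\setminus P_1} c_S(a),
\]
so reserving an arc in the first stage costs its $\alpha$-fraction up front, while arcs used in the second stage without prior reservation additionally incur the $\beta$ inflation.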
The knapsack problem is one of the basic problems in combinatorial optimization. In real-world applications it is often part of a more complex problem. Examples are machine capacities in production planning or bandwidth restrictions in telecommunication network design. Due to unpredictable future settings or erroneous data, parameters of such a subproblem are subject to uncertainties.
In high risk situations a robust approach should be chosen to deal with these uncertainties.
Unfortunately, classical robust optimization outputs solutions with little profit, since it prohibits any adaptation of the solution once the actual realization of the uncertain parameters is known.
This ignores the fact that in most settings minor changes to a previously determined solution are possible. To overcome these drawbacks we allow a limited recovery of a previously fixed item set as soon as the data are known by deleting at most k items and adding up to l new items.
We consider the complexity status of this recoverable robust knapsack problem and extend the classical concept of cover inequalities to obtain stronger polyhedral descriptions. Finally, we present two extensive computational studies to investigate the influence of the parameters k and l on the objective and to evaluate the effectiveness of our new class of valid inequalities.
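To fix ideas, the following brute-force sketch evaluates the recoverable robust objective on a tiny invented instance: a first-stage item set is scored by its worst case over weight scenarios, where in each scenario at most k items may be deleted and at most l new items added before checking the capacity; the enumeration is only practical for very small instances and is not the method used in the paper.

from itertools import chain, combinations

def subsets(items):
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def recovered_profit(first, weights, profits, capacity, k, l):
    # Best profit reachable from the first-stage set by deleting at most k
    # and adding at most l items under the given scenario weights.
    best = float("-inf")
    others = set(profits) - set(first)
    for deleted in (d for d in subsets(first) if len(d) <= k):
        kept = set(first) - set(deleted)
        for added in (a for a in subsets(others) if len(a) <= l):
            chosen = kept | set(added)
            if sum(weights[i] for i in chosen) <= capacity:
                best = max(best, sum(profits[i] for i in chosen))
    return best

profits = {0: 4, 1: 3, 2: 2, 3: 2}                               # invented data
scenarios = [{0: 3, 1: 2, 2: 2, 3: 1}, {0: 5, 1: 2, 2: 3, 3: 1}]
capacity, k, l = 6, 1, 1

best_first = max(
    subsets(profits),
    key=lambda fs: min(recovered_profit(fs, w, profits, capacity, k, l) for w in scenarios),
)
print(best_first)   # first-stage set with the best worst-case recovered profit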
In this paper, we investigate the recoverable robust knapsack problem,
where the uncertainty of the item weights follows the approach of Bertsimas and
Sim. In contrast to the robust approach, a limited recovery action is allowed,
i.e., up to k items may be removed when the actual weights are known. This problem
is motivated by the assignment of traffic nodes to antennas in wireless network
planning. Starting from an exponential min-max optimization model, we derive an
integer linear programming formulation of quadratic size. In a preliminary computational
study, we evaluate the gain of recovery using realistic planning data.