This paper provides a generic formulation for rolling stock planning problems in the context of intercity passenger traffic. The main contributions are a graph theoretical model and a Mixed-Integer-Programming formulation that integrate all main requirements of the considered Vehicle-Rotation-Planning problem (VRPP). We show that it is possible to solve this model for real-world instances provided by our industrial partner DB Fernverkehr AG using modern algorithms and computers.
We formulate the static mechanical coupling of a geometrically exact Cosserat rod
to a nonlinearly elastic continuum. In this setting, appropriate coupling conditions have
to connect a one-dimensional model with director variables to a three-dimensional
model without directors.
Two alternative coupling conditions are proposed,
which correspond to two different configuration trace spaces.
For both we show existence of solutions of the coupled problems, using the direct
method of the calculus of variations. From the first-order optimality conditions
we also derive the corresponding conditions for the dual variables. These are
then interpreted in mechanical terms.
We consider discretizations for reaction-diffusion systems with nonlinear
diffusion in two space dimensions. The applied model can handle heterogeneous
materials and uses the chemical potentials of the involved species as primary variables.
We propose an implicit Voronoi finite volume discretization on regular Delaunay
meshes that allows us to prove uniform, mesh-independent global upper and lower $L^\infty$
bounds for the chemical potentials. These bounds provide the main step for a convergence
analysis for the full discretized nonlinear evolution problem. The fundamental
ideas are energy estimates, a discrete Moser iteration and the use of discrete
Gagliardo-Nirenberg inequalities. For the proof of the Gagliardo-Nirenberg inequalities
we exploit that the discrete Voronoi finite volume gradient norm in 2d coincides
with the gradient norm of continuous piecewise linear finite elements.
We present Undercover, a primal heuristic for nonconvex mixed-integer nonlinear programming (MINLP) that explores a mixed-integer linear subproblem (sub-MIP) of a given MINLP. We solve a vertex covering problem to identify a minimal set of variables that need to be fixed in order to linearize each constraint, a so-called cover. Subsequently, these variables are fixed to values obtained from a reference point, e.g., an optimal solution of a linear relaxation. We apply domain propagation and conflict analysis to try to avoid infeasibilities and learn from them, respectively. Each feasible solution of the sub-MIP corresponds to a feasible solution of the original problem.
We present computational results on a test set of mixed-integer quadratically constrained programs (MIQCPs) and general MINLPs from MINLPLib. It turns out that the majority of these instances allow for small covers. Although general in nature, the heuristic appears most promising for MIQCPs and nicely complements existing root node heuristics in different state-of-the-art solvers.
We present Undercover, a primal heuristic for mixed-integer nonlinear programming (MINLP). The heuristic constructs a mixed-integer linear subproblem (sub-MIP) of a given MINLP by fixing a subset of the variables. We solve a set covering problem to identify a minimal set of variables which need to be fixed in order to linearise each constraint. Subsequently, these variables are fixed to approximate values, e.g. obtained from a linear outer approximation. The resulting sub-MIP is solved by a mixed-integer linear programming solver. Each feasible solution of the sub-MIP corresponds to a feasible solution of the original problem. Although general in nature, the heuristic seems most promising for mixed-integer quadratically constrained programmes (MIQCPs). We present computational results on a general test set of MIQCPs selected from the MINLPLib.
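To make the covering step concrete, the following sketch picks variables to fix with a simple greedy rule. This is illustrative only: the heuristic described above solves the covering problem exactly, and the data layout and function names here are our own assumptions.

    from collections import Counter

    # Greedy sketch of the covering step (the actual heuristic solves this
    # covering problem exactly).  Each nonlinear term is given as the set of
    # variables occurring in it, e.g. {"x", "y"} for x*y and {"x"} for x**2;
    # fixing at least one variable per term linearizes that term.
    def greedy_cover(terms):
        uncovered = [set(t) for t in terms]
        cover = set()
        while uncovered:
            counts = Counter(v for t in uncovered for v in t)
            best, _ = counts.most_common(1)[0]      # most frequent variable
            cover.add(best)
            uncovered = [t for t in uncovered if best not in t]
        return cover

    # toy usage: the terms x*y and y**2 are both linearized by fixing y
    print(greedy_cover([{"x", "y"}, {"y"}]))        # -> {'y'}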
The pole condition approach for deriving transparent boundary conditions is extended to the time-dependent, two-dimensional case. Non-physical modes of the solution are identified by the position of poles of the solution's spatial Laplace transform in the complex plane. By requiring the Laplace transform to be analytic on some
problem-dependent complex half-plane, these modes can be
suppressed. The resulting algorithm computes a finite number of coefficients of a series expansion of the Laplace transform, thereby providing an approximation to the exact boundary condition. The resulting error decays super-algebraically with the number of coefficients, so relatively few additional degrees of freedom are
sufficient to reduce the error to the level of the discretization error in the interior of the computational domain. The approach shows good results for the Schrödinger and the drift-diffusion equation
but, in contrast to the one-dimensional case, exhibits instabilities for the wave and Klein-Gordon equations. Numerical examples are shown that demonstrate the good performance in the former and the instabilities in the latter case.
We present a time-dependent finite element model of the human knee joint of full 3D geometric complexity together with advanced numerical algorithms needed for its simulation. The model comprises bones, cartilage and the major ligaments, while patella and menisci are still missing. Bones are modeled by linear elastic materials, cartilage by linear viscoelastic materials, and ligaments by one-dimensional nonlinear Cosserat rods. In order to capture the dynamical contact problems correctly, we solve the full PDEs of elasticity with strict contact inequalities. The spatio-temporal discretization follows a time layers approach (first time, then space discretization). For the time discretization of the elastic and viscoelastic parts we use a new contact-stabilized Newmark method, while for the Cosserat rods we choose an energy-momentum method. For the space discretization, we use linear finite elements for the elastic and viscoelastic parts and novel geodesic finite elements for the Cosserat rods. The coupled system is solved by a Dirichlet-Neumann method. The large algebraic systems of the bone-cartilage contact problems are solved efficiently by the truncated non-smooth Newton multigrid method.
When simulating isolated resonators, the application of transparent boundary conditions causes the approximated spectrum to be polluted with spurious solutions. Distinguishing these artificial solutions from solutions with a physical meaning is often difficult and requires a priori knowledge of the spectrum or the expected field distribution of resonant states. We present an implementation of the pole condition as a transparent boundary condition, which distinguishes between incoming and outgoing waves by the location of the poles of their Laplace transforms. This implementation depends on one tuning parameter. We will use the sensitivity of the computed solutions to perturbations of this parameter as a means to identify spurious solutions. To obtain global statements, we will combine this technique with a convergence monitor for the boundary condition.
The mixed regularity of electronic wave functions in fractional order and weighted Sobolev spaces
(2012)
The paper continues the study of the regularity of electronic wave functions in Hilbert spaces of mixed derivatives. It is shown that the eigenfunctions of electronic Schrödinger operators and their
exponentially weighted counterparts possess, roughly speaking, square integrable mixed weak derivatives of fractional order $\vartheta$ for $\vartheta<3/4$. The bound $3/4$ is best possible and can neither be reached nor surpassed. Such results are important for the study
of sparse grid-like expansions of the wave functions and show that their asymptotic convergence rate measured in terms of the number of ansatz functions involved does not deteriorate with the number of electrons.
The hypergraph assignment problem (HAP) is the generalization of assignments
from directed graphs to directed hypergraphs. It serves, in particular,
as a universal tool to model several train composition rules in vehicle rotation
planning for long distance passenger railways. We prove that even for problems
with a small hyperarc size and hypergraphs with a special partitioned structure
the HAP is NP-hard and APX-hard. Further, we present an extended integer
linear programming formulation which implies, e.g., all clique inequalities.
Actin is a major structural protein of the eukaryotic cytoskeleton and enables cell motility.
Here, we present a model of the actin filament (F-actin) that incorporates the global structure
of the recently published model by Oda et al. but also conserves internal stereochemistry. A
comparison is made using molecular dynamics simulation of the model with other recent F-actin models. A number of structural determinants such as the protomer propeller angle, the
number of hydrogen bonds and the structural variation among the protomers are analyzed.
The MD comparison is found to reflect the evolution in quality of actin models over the last
six years. In addition, simulations of the model are carried out in states with either ADP or
ATP bound, and local hydrogen-bonding differences are characterized. The results point to the
significance of a direct interaction of Gln137 with ATP for activation of ATPase activity after
the G-to-F-actin transition.
We consider convex optimization problems with $k$th order stochastic dominance constraints for $k\ge 2$. We discuss distances of random variables that are relevant for the dominance relation and establish quantitative stability results for optimal values and solution sets in terms of a suitably selected probability metric. Moreover, we provide conditions ensuring that the optimal value function is Hadamard directionally differentiable. Finally, we discuss some implications of the results for empirical (Monte Carlo,
sample average) approximations of dominance constrained optimization models.
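For orientation (standard textbook notation, not quoted from the abstract above): with the integrated distribution functions $F^{(1)}_X(\eta)=P(X\le\eta)$ and $F^{(k)}_X(\eta)=\int_{-\infty}^{\eta}F^{(k-1)}_X(t)\,dt$ for $k\ge 2$, a $k$th order dominance constraint $G(x)\succeq_{(k)} Y$ requires $F^{(k)}_{G(x)}(\eta)\le F^{(k)}_{Y}(\eta)$ for all $\eta\in\mathbb{R}$, so the problems considered are of the form $\min\{f(x): x\in X,\ G(x)\succeq_{(k)} Y\}$.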
Some mathematical problems related to the 2nd order optimal shape of a crystallization interface
(2012)
We consider the problem of optimizing the stationary temperature distribution and the equilibrium shape of the solid-liquid interface in a two-phase system subject to a temperature gradient. The interface satisfies the minimization principle of the free energy, while the temperature solves the heat equation with a radiation boundary condition at the outer wall. Under the condition that the temperature gradient is uniformly negative in the direction of crystallization, the interface is expected to have a global graph representation. We reformulate this condition as a pointwise constraint on the gradient of the state, and we derive the first order optimality system for a class of objective functionals that account for the second surface derivatives and for the surface temperature gradient.
Particle methods have become indispensable in conformation dynamics to compute transition rates in protein folding, binding processes and molecular design, to mention a few. Conformation dynamics requires a decomposition of a molecule's position space into metastable conformations. In this paper, we show how this decomposition can be obtained via the design of either ``soft'' or ``hard'' molecular conformations. We show that the soft approach results in a larger metastability of the decomposition and is thus more advantageous. This is illustrated by a simulation of Alanine Dipeptide.
We characterize the Smith form of skew-symmetric matrix polynomials
over an arbitrary field $\mathbb{F}$,
showing that all elementary divisors occur with even multiplicity.
Restricting the class of equivalence transformations to unimodular congruences,
a Smith-like skew-symmetric canonical form
for skew-symmetric matrix polynomials is also obtained.
These results are used to analyze the eigenvalue and elementary divisor structure
of matrices expressible as products of two skew-symmetric matrices,
as well as the existence of structured linearizations
for skew-symmetric matrix polynomials.
By contrast with other classes of structured matrix polynomials
(e.g., alternating or palindromic polynomials),
every regular skew-symmetric matrix polynomial
is shown to have a structured strong linearization.
While there are singular skew-symmetric polynomials of even degree
for which a structured linearization is impossible,
for each odd degree we develop a skew-symmetric companion form
that uniformly provides a structured linearization
for every regular and singular skew-symmetric polynomial
of that degree.
Finally, the results are applied to the construction of minimal
symmetric factorizations of skew-symmetric rational matrices.
Cubature methods, a powerful alternative to Monte Carlo due to Kusuoka [Adv. Math. Econ. 6, 69–83, 2004] and Lyons–Victoir [Proc. R. Soc. Lond. Ser. A 460, 169–198, 2004], involve the solution to numerous auxiliary ordinary differential equations. With focus on the Ninomiya-Victoir algorithm [Appl. Math. Fin. 15, 107–121, 2008], which corresponds to a concrete level 5 cubature method, we study some parametric diffusion models motivated from financial applications, and exhibit structural conditions under which all involved ODEs can be solved explicitly and efficiently. We then enlarge the class of models for which this technique applies, by introducing a (model-dependent) variation of the Ninomiya-Victoir method. Our method remains easy to implement; numerical examples illustrate the savings in computation time.
We consider risk-averse formulations of multistage stochastic linear programs. For these formulations, based on convex combinations of spectral risk measures, risk-averse dynamic programming equations can be written. As a result, the Stochastic Dual Dynamic Programming
(SDDP) algorithm can be used to obtain approximations of
the corresponding risk-averse recourse functions. This allows us to define a risk-averse nonanticipative feasible policy for the stochastic linear program. Formulas for the cuts that approximate the recourse functions are given.
Scalable Frames
(2012)
Tight frames can be characterized as those frames which possess optimal numerical stability properties. In this paper, we consider the question of modifying a general frame to generate a tight frame by rescaling its frame vectors; a process which can also be regarded as perfect preconditioning of a frame by a diagonal operator. A frame is called scalable, if such a diagonal operator exists. We derive various characterizations of scalable frames, thereby including the infinite-dimensional situation. Finally, we provide a geometric interpretation of scalability in terms of conical surfaces.
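In formulas (standard frame-theory notation, included for orientation rather than quoted from the paper): a frame $\{f_i\}_{i=1}^N$ for $\mathbb{R}^n$ is scalable if there exist scalars $c_i\ge 0$ such that $\sum_{i=1}^N c_i^2\,|\langle x, f_i\rangle|^2 = A\,\|x\|^2$ for all $x\in\mathbb{R}^n$ and some constant $A>0$, i.e., the rescaled system $\{c_i f_i\}$ is a tight frame.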
Primal heuristics are an important component of state-of-the-art codes for mixed integer programming. In this paper, we focus on primal heuristics that only employ computationally inexpensive procedures such as rounding and logical deductions (propagation). We give an overview of eight different approaches. To assess the impact of these primal heuristics on the ability to find feasible solutions, in particular early during search, we introduce a new performance measure, the primal integral. Computational experiments evaluate this and other measures on MIPLIB 2010 benchmark instances.
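A rough sketch of how such a measure can be evaluated from a solver's incumbent history is given below; the exact definition and normalization used in the paper may differ, and all names are our own.

    # Sketch of a primal-integral computation: integrate a scaled primal gap
    # over the solving time, given the incumbent history and a reference value.
    def primal_gap(z, z_ref):
        """Scaled gap in [0, 1]; defined as 1 while no solution is known."""
        if z is None or z * z_ref < 0:
            return 1.0
        if abs(z_ref) < 1e-9 and abs(z) < 1e-9:
            return 0.0
        return abs(z_ref - z) / max(abs(z_ref), abs(z))

    def primal_integral(incumbents, z_ref, time_limit):
        """incumbents: list of (time, objective value), sorted by time."""
        events = [(0.0, None)] + list(incumbents) + [(time_limit, None)]
        total, current = 0.0, None
        for (t0, z0), (t1, _) in zip(events, events[1:]):
            if z0 is not None:
                current = z0
            total += primal_gap(current, z_ref) * (t1 - t0)
        return total

    # toy usage: a solution of value 150 after 10s, the optimum 100 after 60s
    print(primal_integral([(10.0, 150.0), (60.0, 100.0)], z_ref=100.0, time_limit=100.0))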
Traffic in communication networks fluctuates heavily over time. Thus, to avoid capacity bottlenecks,
operators highly overestimate the traffic volume during network planning. In this paper we consider
telecommunication network design under traffic uncertainty, adapting the robust optimization approach
of Bertsimas and Sim [21]. We present three different mathematical formulations for this problem, provide valid inequalities,
study the computational implications, and evaluate the realized robustness.
To enhance the performance of the mixed-integer programming solver we derive robust cutset inequalities generalizing their deterministic counterparts. Instead of a single cutset inequality for every
network cut, we derive multiple valid inequalities by exploiting the extra variables available in the robust formulations. We show that these inequalities define facets under certain conditions and that they
completely describe a projection of the robust cutset polyhedron if the cutset consists of a single edge.
For realistic networks and live traffic measurements we compare the formulations and report on the
speed-up achieved by the valid inequalities. We study the “price of robustness” and evaluate the approach by
analyzing the real network load. The results show that the robust optimization approach has the potential
to support network planners better than present methods.
We consider a sorting problem from railway optimization
called train classification: incoming trains are split up into their single
cars and reassembled to form new outgoing trains. Trains are subject
to delay, which may render a prepared sorting schedule infeasible for the
disturbed situation. The classification methods applied today deal with
this issue by completely disregarding the input order of cars, which provides
robustness against any amount of disturbance but also wastes the
potential contained in the a priori knowledge about the input.
We introduce a new method that provides a feasible sorting schedule for
the expected input and allows additional sorting steps to be inserted flexibly
if the schedule has become infeasible after the disturbed input is revealed.
By excluding disruptions that almost never occur from our consideration,
we obtain a classification process that is quicker than the current railway
practice but still provides robustness against realistic delays. In fact, our
algorithm allows fast classification to be traded off flexibly against high degrees
of robustness, depending on the respective need. We further explore this
flexibility in experiments on real-world traffic data, underlining that our
algorithm improves on the methods currently applied in practice.
We consider the problem of numerical approximation for forward-backward stochastic
differential equations with drivers of quadratic growth (qgFBSDE). To illustrate the significance
of qgFBSDE, we discuss a problem of cross hedging of an insurance related financial
derivative using correlated assets. For the convergence of numerical approximation schemes for
such systems of stochastic equations, path regularity of the solution processes is instrumental.
We present a method based on the truncation of the driver, and explicitly exhibit error estimates
as functions of the truncation height. We discuss a reduction method to FBSDE with globally
Lipschitz continuous drivers, by using the Cole-Hopf exponential transformation. We finally
illustrate our numerical approximation methods by giving simulations for prices and optimal
hedges of simple insurance derivatives.
Resolving the apparent gap in complexity between
simulated and measured kinetics of biomolecules
(2012)
Molecular simulations of biomolecules often reveal a complex picture of their kinetics,
whereas kinetic experiments typically seem to indicate considerably simpler two- or three-state
kinetics. Markov state models (MSM) provide a tool to link simulation and experiment,
and to resolve this apparent contradiction.
RENS – the optimal rounding
(2012)
This article introduces RENS, the relaxation enforced neighborhood search, a large neighborhood search algorithm for mixed integer nonlinear programming (MINLP) that uses a sub-MINLP to explore the set of feasible roundings of an optimal solution $x'$ of a linear or nonlinear relaxation. The sub-MINLP is constructed by fixing integer variables $x_j$ with $x'_j \in \mathbb{Z}$ and bounding the remaining integer variables to $x_j \in \{\lfloor x'_j\rfloor, \lceil x'_j\rceil\}$. We describe two different applications of RENS: as a standalone algorithm to compute an optimal rounding of the given starting solution and as a primal heuristic inside a complete MINLP solver.
We use the former to compare different kinds of relaxations and the impact of cutting planes on the roundability of the corresponding optimal solutions. We further utilize RENS to analyze the performance of three rounding heuristics implemented in the branch-cut-and-price framework SCIP. Finally, we study the impact of RENS when it is applied as a primal heuristic inside SCIP.
All experiments were performed on three publicly available test sets of mixed integer linear programs (MIPs), mixed integer quadratically constrained programs (MIQCPs), and MINLPs, using only software that is available in source code.
It turns out that for these problem classes 60% to 70% of the instances have roundable relaxation optima and that the success rate of RENS does not depend on the percentage of fractional variables. Last but not least, RENS applied as a primal heuristic nicely complements existing root node heuristics in SCIP and improves the overall performance.
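The variable fixing and bounding that defines the RENS sub-MINLP can be summarized in a few lines; this is a sketch under the assumption that the relaxation solution is given as a dictionary, and the function name and tolerance are our own, not SCIP's.

    import math

    # Build the sub-MINLP bounds from a relaxation solution x' as described
    # above: integral integer variables are fixed, fractional integer
    # variables are restricted to {floor, ceil}, continuous variables keep
    # their original bounds.
    def rens_bounds(xprime, integer_vars, lb, ub, tol=1e-6):
        new_lb, new_ub = dict(lb), dict(ub)
        for j in integer_vars:
            if abs(xprime[j] - round(xprime[j])) <= tol:
                new_lb[j] = new_ub[j] = round(xprime[j])   # fix to x'_j
            else:
                new_lb[j] = math.floor(xprime[j])          # round down ...
                new_ub[j] = math.ceil(xprime[j])           # ... or up
        return new_lb, new_ub

    # toy usage: x is integral in the relaxation, y is fractional, z continuous
    lb = {"x": 0, "y": 0, "z": 0.0}
    ub = {"x": 10, "y": 10, "z": 5.0}
    print(rens_bounds({"x": 3.0, "y": 2.4, "z": 1.7}, ["x", "y"], lb, ub))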
In this paper we investigate two different recoverable robust models to deal with cost uncertainties in a shortest path problem. Recoverable robustness extends the classical concept of robustness to deal with uncertainties by incorporating limited recovery actions after the
full data are revealed. Our first model focuses on the case where the recovery actions are quite restricted: after a simple path is fixed in the first stage, in the second stage, after all data are revealed, any path containing at most k new arcs may be chosen.
Thus, the parameter k can be interpreted as a mediator between
robust optimization (no changes allowed) and optimization
on the fly (an arbitrary solution can be chosen). Considering three
classical scenario sets, which model uncertainties in the cost function,
we show that this new problem is strongly NP-hard in all
these cases and is not approximable, unless P=NP.
This is in contrast to the robust shortest path problem, where, for
example, an optimal solution can be computed efficiently for interval
and Gamma-scenarios. For series-parallel graphs and interval scenarios,
we present a polynomial time algorithm for this recoverable robust
setting.
In our second model the recovery set, i.e., the set of paths selectable
in the second stage, is not limited, but deviating from the previous
choice comes at extra cost. Thus, a path chosen in the first stage
produces renting costs modeled as an alpha-fraction of the scenario
cost. For an arc taken in the second stage the remaining cost needs
to be paid in addition to some extra inflation cost modeled by a beta-fraction
of the scenario cost, if the arc was not reserved beforehand. The
complexity status of this problem is similar to the robust case. Yet,
for Gamma-scenarios the problem is again strongly NP-hard,
but can be approximated.
The knapsack problem is one of the basic problems in combinatorial optimization. In real-world applications it is often part of a more complex problem. Examples are machine capacities in production planning or bandwidth restrictions in telecommunication network design. Due to unpredictable future settings or erroneous data, parameters of such a subproblem are subject to uncertainties.
In high risk situations a robust approach should be chosen to deal with these uncertainties.
Unfortunately, classical robust optimization outputs solutions with little profit by prohibiting any adaption of the solution when the actual realization of the uncertain parameters is known.
This ignores the fact that in most settings minor changes to a previously determined solution are possible. To overcome these drawbacks we allow a limited recovery of a previously fixed item set as soon as the data are known by deleting at most k items and adding up to l new items.
We consider the complexity status of this recoverable robust knapsack problem and extend the classical concept of cover inequalities to obtain stronger polyhedral descriptions. Finally, we present two extensive computational studies to investigate the influence of the parameters k and l on the objective and to evaluate the effectiveness of our new class of valid inequalities.
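To illustrate the recovery action itself, a brute-force evaluation of the second stage for a single realized scenario might look as follows. This is purely illustrative and only feasible for tiny instances; the study described above works with an ILP model and cover inequalities, not enumeration.

    from itertools import combinations

    # Best recovery of a first-stage item set under one realized scenario:
    # delete at most k items, add at most l new ones, respect the capacity,
    # maximize profit.
    def best_recovery(first_stage, items, profit, weight, capacity, k, l):
        outside = [i for i in items if i not in first_stage]
        best = None
        for ndel in range(k + 1):
            for removed in combinations(sorted(first_stage), ndel):
                for nadd in range(l + 1):
                    for added in combinations(outside, nadd):
                        chosen = (set(first_stage) - set(removed)) | set(added)
                        if sum(weight[i] for i in chosen) <= capacity:
                            value = sum(profit[i] for i in chosen)
                            if best is None or value > best[0]:
                                best = (value, chosen)
        return best

    # toy usage: item 2 becomes too heavy in this scenario, so it is swapped
    profit = {1: 4, 2: 3, 3: 5}
    weight = {1: 2, 2: 6, 3: 3}
    print(best_recovery({1, 2}, [1, 2, 3], profit, weight, capacity=6, k=1, l=1))
    # -> (9, {1, 3})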
In this paper, we investigate the recoverable robust knapsack problem,
where the uncertainty of the item weights follows the approach of Bertsimas and
Sim. In contrast to the robust approach, a limited recovery action is allowed,
i.e., up to k items may be removed when the actual weights are known. This problem
is motivated by the assignment of traffic nodes to antennas in wireless network
planning. Starting from an exponential min-max optimization model, we derive an
integer linear programming formulation of quadratic size. In a preliminary computational
study, we evaluate the gain of recovery using realistic planning data.
A robust implementation of a Dupire type local volatility model is an important issue for every option trading floor. In the present note we provide new analytic insights into the asymptotic behavior of local volatility in the wings. We present a general approximation formula and specialize it to the Heston model, showing that local variance is linear in the wings. This further justifies the choice of certain local volatility parametrizations.
Learning during search allows solvers for discrete optimization problems to remember parts of the search that they have already performed and avoid revisiting redundant parts. Learning approaches pioneered by the SAT and CP communities have been successfully incorporated into the SCIP constraint integer programming platform. In this paper we show that performing a heuristic constraint programming search during root node processing of a binary program can rapidly learn useful nogoods, bound changes, primal solutions, and branching statistics that improve the remaining IP search.
Rapid Branching
(2012)
We propose rapid branching (RB) as a general branch-and-bound heuristic for solving large scale optimization problems in traffic and transport. The key idea is to combine a special branching rule and a greedy node selection strategy in order to produce solutions of controlled quality rapidly and efficiently. We report on three successful applications of the method for integrated vehicle and crew scheduling, railway track allocation, and railway vehicle rotation planning.
The track allocation problem, also known as train routing problem or train timetabling problem, is to find a conflict-free set of train routes of maximum value in a railway network. Although it can be modeled as a standard path packing problem, instances of sizes relevant for real-world railway applications could not be solved up to now. We propose a rapid branching column generation approach that integrates the solution of the LP relaxation of a path coupling formulation of the problem with a special rounding heuristic. The approach is based on and exploits special properties of the bundle method for the approximate minimization of convex piecewise linear functions. Computational results for difficult instances of the benchmark library TTPLIB are reported.
Today, railway timetabling and track allocation are among the most challenging problems a railway company has to solve. Especially due to the deregulation of the transport market in recent years, several suppliers of railway traffic have entered the market in Europe. This leads to more potential conflicts between trains caused by an increasing demand for train paths. Planning and operating railway transportation systems is extremely hard due to the combinatorial complexity of the underlying discrete optimization problems, the technical intricacies, and the immense size of the problem instances. In order to make the best use of the infrastructure and to ensure economic operation, efficient planning of the railway operation is indispensable. Mathematical optimization models and algorithms can help to automate and tackle these challenges. Our contribution in this paper is to present a renewed planning process, necessitated by the liberalization in Europe, and an associated concept for track allocation that consists of three important parts: simulation, aggregation, and optimization. Furthermore, we present results of our general framework for real-world data.
We consider the solution of a system of stochastic generalized equations (SGE) where the underlying functions are mathematical expectations of random set-valued mappings. SGEs have many applications, such as characterizing optimality conditions of a nonsmooth stochastic optimization problem or a stochastic equilibrium problem. We derive quantitative continuity of the expected value of the set-valued mapping with respect to the variation of the underlying
probability measure in a metric space. This leads to the subsequent qualitative and quantitative stability analysis of solution set mappings of the SGE. Under some metric regularity conditions, we derive Aubin's property of the solution set mapping with respect to the change of probability measure. The established results are
applied to the stability analysis of stationary points of classical one-stage and two-stage stochastic minimization problems, two-stage stochastic mathematical programs with equilibrium constraints, and stochastic programs with second order dominance constraints.
Markov (state) models (MSMs) have attracted a lot of interest recently as they (1) can probe
long-term molecular kinetics based on short-time simulations, (2) offer a way to analyze great
amounts of simulation data with relatively little subjectivity of the analyst, (3) provide insight into
microscopic quantities such as the ensemble of transition pathways, and (4) allow simulation data
to be reconciled with measurement data in a rigorous and explicit way. Here we sketch our current
perspective of Markov models and explain in short their theoretical basis and assumptions. We
describe transition path theory which allows the entire ensemble of protein folding pathways to be
investigated and that combines naturally with Markov models. Experimental observations can be
naturally linked to Markov models with the dynamical fingerprint theory, by which experimentally
observable timescales can be equipped with an understanding of the structural rearrangement
processes that take place at these timescales. The concepts of this paper are illustrated by a
simple kinetic model of protein folding.
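The quantity that connects such models to experimentally observable timescales can be computed in a few lines; this is a generic sketch with a made-up transition matrix, not code or data from the paper.

    import numpy as np

    # Implied timescales t_i = -tau / ln(lambda_i) of a row-stochastic
    # transition matrix estimated at lag time tau; the slow structural
    # processes correspond to eigenvalues close to (but below) 1.
    def implied_timescales(T, lag_time):
        eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
        return [-lag_time / np.log(lam) for lam in eigvals[1:] if 0.0 < lam < 1.0]

    # toy three-state model with one slow and one faster process
    T = np.array([[0.97, 0.02, 0.01],
                  [0.02, 0.95, 0.03],
                  [0.01, 0.06, 0.93]])
    print(implied_timescales(T, lag_time=1.0))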
Discrete-state Markov (or master equation) models provide a useful simplified representation for
characterizing the long-time statistical evolution of biomolecules in a manner that allows direct
comparison with experiments as well as the elucidation of mechanistic pathways for an inherently
stochastic process. A vital part of meaningful comparison with experiment is the characterization of
the statistical uncertainty in the predicted experimental measurement, which may take the form of
an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following
a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise due to the finite
quantity of data used to construct the model), there is no way to determine whether the deviations
between model and experiment are statistically meaningful. Previous work has demonstrated that
a Bayesian method that enforces microscopic reversibility can be used to characterize the correlated
uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from
molecular simulation data. Here, we extend this approach to include the uncertainty in observables
that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be
assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the
fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both
experiment and simulation.
Large-scale stochastic models are relevant in many different fields such as computational biology, finance, social sciences, communication and traffic networks. In order to both efficiently simulate and analyze such models and to understand the essential properties of the system, it is desirable to have model reduction techniques that greatly reduce the dimensionality of the model while at the same time preserving the system’s essential dynamical properties. In this paper, a general model reduction technique for the class of discrete space and time Hidden Markov Models is presented, thereby also including the more special class of discrete Markov chains. The method is illustrated on some model applications.
Primal-dual linear Monte Carlo algorithm for multiple stopping - An application to flexible caps
(2012)
In this paper we consider the valuation of Bermudan callable derivatives with
multiple exercise rights. We present in this context a new primal-dual linear
Monte Carlo algorithm that allows for efficient simulation of lower and upper price
bounds without using nested simulations (hence the terminology). The algorithm
is essentially an extension of a primal-dual Monte Carlo algorithm for standard
Bermudan options proposed in Schoenmakers et al. (2011), to the case of multiple
exercise rights. In particular, the algorithm constructs upwardly a system of dual
martingales to be plugged into the dual representation of Schoenmakers (2010).
At each level the respective martingale is constructed via a backward regression
procedure starting at the last exercise date. The thus constructed martingales are
finally used to compute an upper price bound. At the same time, the algorithm
also provides approximate continuation functions which may be used to construct
a price lower bound. The algorithm is applied to the pricing of flexible caps
in a Hull and White (1990) model setup. The simple model choice allows for
comparison of the computed price bounds with the exact price which is obtained
by means of a trinomial tree implementation. As a result, we obtain tight price
bounds for the considered application. Moreover, the algorithm is generically
designed for multi-dimensional problems and is tractable to implement.
We introduce an algorithm for
diffusion weighted magnetic resonance imaging data enhancement based on structural adaptive smoothing in both space and diffusion direction.
The method, called POAS, does not refer to a specific model for the data, like the diffusion tensor or higher order models.
It works by embedding the measurement space into a space with defined metric and group operations, in this case the Lie group of three-dimensional Euclidean motion SE(3).
Subsequently, pairwise comparisons of the values of the diffusion
weighted signal are used for adaptation.
The position-orientation adaptive smoothing preserves the edges of the observed fine and anisotropic structures.
The POAS-algorithm is designed to reduce noise directly in the diffusion weighted images and consequently also to reduce bias and
variability of quantities derived from the data for specific models.
We evaluate the algorithm on simulated and experimental data and demonstrate that it can be used to reduce the number of applied diffusion gradients and
hence acquisition time while achieving similar quality of data, or to improve the quality of data acquired in a clinically feasible scan time setting.
Persistence of rogue waves in extended nonlinear Schrödinger equations: Integrable Sasa-Satsuma case
(2012)
We present the lowest order rogue wave solution of the Sasa-Satsuma equation (SSE), which is one of the integrable extensions of the nonlinear Schrödinger equation (NLSE). In contrast to the Peregrine solution of the NLSE, it is significantly more involved and contains polynomials of fourth order rather than second order in the corresponding expressions. The correct limiting case of the Peregrine solution appears when the extension parameter of the SSE is reduced to zero.
This paper is concerned with a PDE-constrained optimization problem of induction heating, where the state equations consist of 3D time-dependent heat equations coupled with 3D time-harmonic eddy current equations. The control parameters are given by finite real numbers representing applied alternating voltages which enter the eddy current equations via impressed current. The optimization problem is to find optimal voltages so that, under certain constraints on the voltages and the temperature, a desired temperature can be optimally achieved. As there are finitely many control parameters but the state constraint has to be satisfied in an infinite number of points, the problem belongs to a class of semi-infinite programming problems. We present a rigorous analysis of the optimization problem and a numerical strategy based on our theoretical result.
We consider backward stochastic differential equations with drivers of quadratic growth (qgBSDE). We prove several statements concerning path regularity and stochastic smoothness of the solution processes of the qgBSDE, in particular we prove an extension of Zhang's path regularity theorem to the quadratic growth setting. We give explicit convergence rates for the difference between the solution of a qgBSDE and its truncation, filling an important gap in numerics for qgBSDE. We give an alternative proof of second order Malliavin differentiability for BSDE with drivers that are Lipschitz continuous (and differentiable), and
then derive the same result for qgBSDE.
In this paper we formulate a boundary layer approximation
for an Allen-Cahn-type equation involving a small parameter $\varepsilon$. Here, $\varepsilon$ is related to the thickness of the boundary layer and we are interested in the limit when $\varepsilon$ tends to $0$ in order to derive nontrivial boundary conditions. The evolution of the system is written as an energy balance formulation of the $L^2$-gradient flow with the corresponding Allen-Cahn energy functional. By transforming the boundary layer to a fixed domain we show the convergence of the solutions to a solution of a limit system. This is done by using concepts related to Gamma- and Mosco convergence. By considering different scalings in the boundary layer we obtain different boundary conditions.
Mixed integer programming (MIP) has become one of the most important techniques in Operations Research and Discrete Optimization. SCIP (Solving Constraint Integer Programs) is currently one of the fastest non-commercial MIP solvers. It is based on the branch-and-bound procedure in which the problem is recursively split into smaller subproblems, thereby creating a so-called branching tree. We present ParaSCIP, an extension of SCIP, which realizes a parallelization on a distributed memory computing environment. ParaSCIP uses SCIP solvers as independently running processes to solve subproblems (nodes of the branching tree) locally. This makes the parallelization development independent of the SCIP development. Thus, ParaSCIP directly profits from any algorithmic progress in future versions of SCIP. Using a first implementation of ParaSCIP, we were able to solve two previously unsolved instances from MIPLIB2003, a standard test set library for MIP solvers. For these computations, we used up to 2048 cores of the HLRN II supercomputer.
The paper examines the applicability of mathematical programming methods to the simultaneous optimization of the structure and the operational parameters of a combined-cycle-based cogeneration plant. The optimization problem is formulated as a nonconvex mixed-integer nonlinear problem (MINLP) and solved by the MINLP solver LaGO. The algorithm generates a convex relaxation of the MINLP and applies a Branch and Cut algorithm to the relaxation. Numerical results for different demands for electric power and process steam are discussed and a sensitivity analysis is performed.
In the course of the takeover of 6 lines of the Havelbus Verkehrsgesellschaft mbH by ViP Verkehr in Potsdam GmbH, the need arose in 2009 to develop a new line and frequency plan for the year 2010. In a project of the DFG Research Center Matheon, the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) is developing a method for mathematical line planning optimization. This tool was used in the optimization of the ViP line plan 2010 within a study accompanying the project, in order to explore alternatives under different planning and target specifications. The article describes an evaluation of the results with the traffic analysis system Visum of PTV AG. The evaluations confirm that, with the help of mathematical optimization, a further reduction of travel time by 1%, a perceived travel time shortened by 6%, 10% less riding time in the vehicle, and a simultaneous cost reduction of 5% are possible.
We discuss shape optimization problems for cylindrical tubes that are loaded by a time-dependent applied force. This is a problem of shape optimization that leads to optimal control in linear elasticity theory. We determine the optimal thickness of a cylindrical tube minimizing the deformation of the tube under the influence of the external force. The main difficulty is that the state equation is a hyperbolic partial differential equation of 4th order. First order necessary conditions for the optimal solution are derived. Based on them, a numerical method is set up and numerical examples are presented.
Optimal Thickness of a Cylindrical Shell -- An Optimal Control Problem in Linear Elasticity Theory
(2012)
In this paper we discuss optimization problems for cylindrical tubes which are loaded by an applied force. This is a problem of optimal control in linear elasticity theory (shape optimization). We are looking for an optimal thickness minimizing the deflection (deformation) of the tube under the influence of an external force.
From basic equations of mechanics, we derive the equation of deformation. We apply the displacement approach from shell theory and make use of the hypotheses of
Mindlin and Reissner. A corresponding optimal control problem is formulated and first order necessary conditions for the optimal solution (optimal thickness) are derived.
We present numerical examples which were solved by the finite element method.