We present a formal procedure for structure-preserving model reduction of linear second-order control problems that appear in a variety of physical contexts, e.g., vibromechanical systems or electrical circuit design. Typical balanced truncation methods that project onto the subspace of the largest Hankel singular values fail to preserve the problem's physical structure and may suffer from lack of stability. In this paper we adopt the framework of generalized Hamiltonian systems that covers the class of relevant problems and that allows for a generalization of balanced truncation to second-order problems.
It turns out that the Hamiltonian structure, stability and passivity are preserved if the truncation is done by imposing a holonomic constraint on the system rather than standard Galerkin projection.
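For orientation, the problem class can be written in the standard second-order form (generic notation assumed here, not taken verbatim from the paper):

$$M\ddot{q}(t) + D\dot{q}(t) + Kq(t) = Bu(t), \qquad y(t) = Cq(t),$$

with mass, damping, and stiffness matrices $M$, $D$, $K$; structure-preserving reduction seeks a smaller system of exactly this form rather than an unstructured first-order realization.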
The Steiner connectivity problem is a generalization of
the Steiner tree problem. It consists in finding a minimum cost set of
simple paths to connect a subset of nodes in an undirected graph.
We show that important polyhedral and algorithmic results on the
Steiner tree problem carry over to the Steiner connectivity problem,
namely, the Steiner cut and the Steiner partition inequalities, as
well as the associated polynomial time separation algorithms, can be
generalized. Similar to the Steiner tree case, a certain directed
formulation, which is stronger than the natural undirected one,
plays a central role.
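As a hedged illustration of how the Steiner cut inequalities generalize (the path variables $x_p$ and the notation are assumptions, not the paper's own): for every node set $W$ that separates two of the nodes to be connected, some selected path must cross the cut,

$$\sum_{p \in \mathcal{P}:\; p \cap \delta(W) \neq \emptyset} x_p \;\geq\; 1,$$

where $\delta(W)$ denotes the edges between $W$ and its complement and $\mathcal{P}$ is the given set of simple paths.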
For many fundamental cooperative cost sharing games, especially when costs are supermodular, it is known that Moulin mechanisms inevitably suffer from poor budget balance factors. Mehta, Roughgarden, and Sundararajan recently introduced acyclic mechanisms, which achieve a slightly weaker notion of group-strategyproofness, but leave more flexibility to improve upon the approximation guarantees with respect to budget balance and social cost.
In this paper, we provide a very simple but powerful method for turning any $\rho$-approximation algorithm for a combinatorial optimization problem into a $\rho$-budget balanced acyclic mechanism. Hence, we show that there is no gap between the best possible approximation guarantees of full-knowledge approximation algorithms and weakly group-strategyproof cost sharing mechanisms.
The applicability of our method is demonstrated by deriving mechanisms for scheduling and network design problems which beat the best possible budget balance factors of Moulin mechanisms. By elaborating our framework, we provide means to construct weakly group-strategyproof mechanisms with approximate social cost. The mechanisms we develop for completion time scheduling problems perform surprisingly well by achieving the first constant budget balance and social cost factors.
When managing energy or weather related risk, often only imperfect hedging instruments are available. In the first part we illustrate problems arising with imperfect hedging by studying a toy model. We consider an airline's problem of covering income risk due to fluctuating kerosene prices by investing in futures written on heating oil, whose price dynamics are closely correlated. In the second part we outline recent results on exponential utility based cross hedging concepts. They culminate in a generalization of the Black-Scholes delta hedge formula to incomplete markets. Its derivation is based on a purely stochastic approach to utility maximization: it interprets stochastic control problems in the BSDE language and profits from the power of the stochastic calculus of variations.
We consider backward stochastic differential equations with drivers of quadratic growth (qgBSDE). We prove several statements concerning path regularity and stochastic smoothness of the solution processes of the qgBSDE, in particular we prove an extension of Zhang's path regularity theorem to the quadratic growth setting. We give explicit convergence rates for the difference between the solution of a qgBSDE and its truncation, filling an important gap in numerics for qgBSDE. We give an alternative proof of second order Malliavin differentiability for BSDE with drivers that are Lipschitz continuous (and differentiable), and then derive the same result for qgBSDE.
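For reference, a qgBSDE has the generic form (standard notation, assumed here):

$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \qquad |f(s,y,z)| \le C\,(1 + |y| + |z|^2),$$

i.e. the driver $f$ may grow quadratically in the control variable $z$, which is what distinguishes this class from the Lipschitz setting.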
We show that the spectrum of linear delay differential equations with
large delay splits into two different parts. One part, called the
strong spectrum, converges to isolated points when the delay parameter
tends to infinity. The other part, called the pseudocontinuous spectrum,
accumulates near criticality and converges after rescaling to a set
of spectral curves, called the asymptotic continuous spectrum. We
show that the spectral curves and strong spectral points provide a
complete description of the spectrum for sufficiently large delay
and can be comparatively easily calculated by approximating expressions.
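A minimal sketch of the setting, with notation assumed: for a linear delay equation $\dot{x}(t) = A\,x(t) + B\,x(t-\tau)$, the spectrum consists of the roots of the characteristic equation

$$\det\big(\lambda I - A - B\,e^{-\lambda\tau}\big) = 0,$$

and one common rescaling behind the pseudocontinuous spectrum is the ansatz $\lambda = \gamma/\tau + i\omega$, which for $\tau \to \infty$ yields curves $\gamma(\omega)$, the asymptotic continuous spectrum.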
Local existence, uniqueness and smooth dependence for nonsmooth quasilinear parabolic problems
(2009)
We prove local existence, uniqueness, Hölder regularity in space and time, and smooth dependence in Hölder spaces for a general class of quasilinear parabolic initial boundary value problems with nonsmooth data.
As a result the gap between low smoothness of the data, which is typical for
many applications, and high smoothness of the solutions, which is necessary
for the applicability of differential calculus to abstract formulations of the
initial boundary value problems, has been closed. The theory works for any
space dimension, and the nonlinearities are allowed to be nonlocal and to
have any growth. The main tools are new maximal regularity results [19, 20]
in Sobolev–Morrey spaces for linear parabolic initial boundary value problems
with nonsmooth data, linearization techniques and the Implicit Function Theorem.
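A representative member of the problem class (a schematic example with assumed notation, not a formulation from the paper) is a quasilinear initial boundary value problem of the type

$$\partial_t u - \nabla \cdot \big(a(u)\,\nabla u\big) = f(u) \ \text{ in } \Omega \times (0,T), \qquad u(0) = u_0,$$

with mixed boundary conditions on a nonsmooth domain $\Omega$; the theory described above yields local existence, uniqueness, and smooth dependence for such problems in Hölder spaces.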
In this paper we study a certain cardinality constrained packing integer program which is motivated by the problem of dimensioning a cut in a two-layer network. We prove NP-hardness and consider the facial structure of the corresponding polytope. We provide a complete description for the smallest nontrivial case and develop two general classes of facet-defining inequalities. This approach extends the notion of the well-known cutset inequalities to two network layers.
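Schematically, and with notation assumed here (the paper's exact formulation may differ), a cardinality constrained packing integer program has the form

$$\max\Big\{\, c^T x \;:\; a^T x \le b,\ \textstyle\sum_{i=1}^n x_i \le k,\ x \in \{0,1\}^n \Big\},$$

where the cardinality bound $k$ limits how many items may be packed in addition to the knapsack-type packing row.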
In this paper, we present a model-based optimization approach for the design of multi-layer networks. The proposed framework is based on a series of increasingly abstract models – from a general technical system model to a problem specific mathematical model – which are used in a planning cycle to optimize the multi-layer networks. In a case study we show how central design questions for an IP-over-WDM network architecture can be answered
using this approach. Based on reference networks from the German research project EIBONE, we investigate the influence of various planning parameters on the total design cost. This includes a comparison of point-to-point vs. transparent optical layer architectures, different traffic distributions, and the use of PoS vs. Ethernet interfaces.
We study superhedging of contingent claims with physical delivery in a discrete-time market model with convex transaction costs. Our model extends Kabanov's currency market model by allowing for nonlinear illiquidity effects. We show that an appropriate generalization of Schachermayer's robust no-arbitrage condition implies that the set of claims hedgeable with zero cost is closed in probability. Combined with classical techniques of convex analysis, the closedness yields a dual characterization of premium processes that are sufficient to superhedge a given claim process. We also extend the fundamental theorem of asset pricing to general conical models.
Large-scale maintenance in industrial plants requires the entire shutdown of production units for disassembly, comprehensive inspection, and renewal. It is an important process but causes high out-of-service costs. Therefore, a good schedule for a shutdown and an analysis of possible associated risks are crucial for the manufacturer.
We derive models and algorithms for shutdown scheduling that include different features
such as time-cost tradeoff, precedence constraints, hiring external resources, resource leveling, different working shifts, and risk analysis. Our experimental results show that our methods solve large real-world instances very fast and yield an excellent resource utilization. A comparison with solutions of a mixed integer program on smaller instances proves the high quality
of the schedules that our algorithms produce within a few minutes.
Our algorithms work in two phases. The first phase supports the manager in finding a good makespan for the shutdown. It computes an approximate project time-cost tradeoff curve together with a stochastic evaluation of the risk of meeting a particular makespan t. Our risk measures are the expected tardiness at time t and the probability of completing the shutdown within time t. In the second, detailed planning phase, we solve the actual scheduling optimization problem for the makespan chosen in the first phase heuristically and compute a detailed schedule that respects all side constraints. Again, we complement this by computing upper bounds for the same two risk measures, but now for the detailed schedule. The shutdown problem has many relationships with well-established areas of scheduling, and we also give an overview of the large variety of scheduling problems involved.
We introduce a new technique for solving several sequencing problems. We consider Gilmore and Gomory's variant of the Traveling Salesman Problem and two variants of no-wait flowshop scheduling, the classical makespan minimization problem and a new problem arising in the multistage production process in steel manufacturing.
Our technique is based on an intuitive interpretation of sequencing problems as Eulerian Extension Problems. This view reveals new structural insights and leads to elegant and simple algorithms and proofs for this ancient type of problem. As a major benefit, we compute not just a single solution; instead, we represent the entire space of optimal solutions. For the new flowshop scheduling problem we give a full complexity classification for any machine configuration.
We consider backward stochastic differential equations (BSDE) with nonlinear generators typically of quadratic growth in the control variable. A measure solution of such a BSDE will be understood as a probability measure under which the generator is seen as vanishing, so that the classical solution can be reconstructed by a combination of the operations of conditioning and using martingale representations. In case the terminal condition is bounded and the generator fulfills the usual continuity and boundedness conditions, we show that measure solutions with equivalent measures just reinterpret classical ones. In case of terminal conditions that have only exponentially bounded moments, we discuss a series of examples which show that, in case of non-uniqueness, classical solutions that fail to be measure solutions can coexist with different measure solutions.
We solve Skorokhod's embedding problem for Brownian motion with linear drift $(W_t + \kappa t)_{t\ge 0}$ by means of techniques of stochastic control theory. The search for a stopping time $T$ such that the law of $W_T + \kappa T$ coincides with a prescribed law $\mu$ possessing a first moment is based on solutions of backward stochastic differential equations of quadratic type. This new approach generalizes an approach by Bass [BAS] to the classical version of Skorokhod's embedding problem using martingale representation techniques.
Financial markets with asymmetric information: information drift, additional utility and entropy
(2009)
We review a general mathematical link between utility and information theory appearing in a simple financial market model with two kinds of small investors: insiders, whose extra information is stored in an enlargement of the less informed agents' filtration. The insider's expected logarithmic utility increment is described in terms of the information drift, i.e. the drift one has to eliminate in order to perceive the price dynamics as a martingale from his perspective. We describe the information drift in a very general setting by natural quantities expressing the conditional laws of the better informed view of the world. This, on the other hand, allows us to identify the additional utility by entropy related quantities known from information theory.
We deal with backward stochastic differential equations with time delayed generators. In this new type of equations, a generator at time t can depend on the values of a solution in the past, weighted with a time delay function for instance of the moving average type. We prove existence and uniqueness of a solution for a sufficiently small time horizon or for a sufficiently small Lipschitz constant of a generator. We give examples of BSDE with time delayed generators that have multiple solutions or that have no solutions. We show for some special class of generators that existence and uniqueness may still hold for an arbitrary time horizon and for arbitrary Lipschitz constant. This class includes linear time delayed generators, which we study in more detail. We are concerned with different properties of a solution of a BSDE with time delayed generator, including the inheritance of boundedness from the terminal condition, the comparison principle, the existence of a measure solution and the BMO martingale property. We give examples in which they may fail.
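In the notation assumed here (not taken verbatim from the paper), a BSDE with time delayed generator reads

$$Y_t = \xi + \int_t^T f\big(s, Y_{s-\cdot}, Z_{s-\cdot}\big)\,ds - \int_t^T Z_s\,dW_s,$$

where $Y_{s-\cdot}$ and $Z_{s-\cdot}$ denote the paths of the solution up to time $s$; a moving-average generator such as $f(s, Y_{s-\cdot}) = \frac{\kappa}{s}\int_0^s Y_u\,du$ is one example of the weighting mentioned above.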
In this paper we consider a class of BSDE with drivers of quadratic growth on a stochastic basis generated by continuous local martingales. We first derive the Markov property of a forward-backward system (FBSDE) if the generating martingale is a strong Markov process. Then we establish the differentiability of a FBSDE with respect to the initial value of its forward component. This enables us to obtain the main result of this article, which, from the perspective of a utility optimization interpretation of the underlying control problem on a financial market, takes the following form. The control process of the BSDE steers the system into a random liability depending on a market-external uncertainty, and in this way describes the optimal derivative hedge of the liability by investment in a capital market whose dynamics are described by the forward component. This delta hedge is described in a key formula in terms of a derivative functional of the solution process and the correlation structure of the internal uncertainty, captured by the forward process, and the external uncertainty, responsible for the market incompleteness. The formula largely extends the scope of validity of the results obtained by several authors in the Brownian setting, designed to give a genuinely stochastic representation of the optimal delta hedge in the context of cross hedging insurance derivatives, generalizing the derivative hedge in the Black-Scholes model. Of course, Malliavin's calculus, needed in the Brownian setting, is not available in the general local martingale framework. We replace it by new tools based on stochastic calculus techniques.
We investigate solutions of backward stochastic differential equations (BSDE) with time delayed generators driven by Brownian motions and Poisson random measures that constitute the two components of a Lévy process. In this new type of equations, the generator can depend on the past values of a solution, by feeding them back into the dynamics with a time lag. For such time delayed BSDE, we prove existence and uniqueness of solutions provided we restrict ourselves to a sufficiently small time horizon or the generator possesses a sufficiently small Lipschitz constant. We study differentiability in the variational or Malliavin sense and derive equations that are satisfied by the Malliavin gradient processes. On the chosen stochastic basis this addresses smoothness both with respect to the continuous part of our Lévy process in terms of the classical Malliavin derivative for Hilbert space valued random variables, as well as with respect to the pure jump component for which it takes the form of an increment quotient operator related to the Picard difference operator.
Good-deal bounds have been introduced as a way to obtain valuation bounds
for derivative assets which are tighter than the arbitrage bounds. This is achieved by ruling out not only those prices that violate no-arbitrage restrictions but also
trading opportunities that are `too good'.
We study dynamic good-deal valuation bounds that are derived from bounds on optimal expected growth rates. This leads naturally to restrictions on the set of pricing measures which are local in time, thereby inducing good dynamic properties for the good-deal valuation bounds.
We study good-deal bounds by duality arguments in a general semimartingale setting.
In a Wiener space setting where asset prices evolve as It\^o-processes,
good-deal bounds are then conveniently described by backward SDEs.
We show how the good-deal bounds arise as the value function for an
optimal control problem, where a dynamic coherent a priori risk measure is minimized by the choice of a suitable hedging strategy.
This demonstrates how the theory of no-good-deal valuations can be associated to an established concept of dynamic hedging in continuous time.
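Schematically (notation assumed, not the paper's own), the upper good-deal bound at time $t$ for a claim $X$ takes the form

$$\pi^u_t(X) = \operatorname*{ess\,sup}_{Q \in \mathcal{Q}^{\mathrm{ngd}}} E_Q\big[X \mid \mathcal{F}_t\big],$$

where $\mathcal{Q}^{\mathrm{ngd}}$ is the set of pricing measures that survive the no-good-deal restriction, i.e. those compatible with the bound on optimal expected growth rates.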
Zonotopes With Large 2D Cuts
(2009)
Durhuus and Jonsson (1995) introduced the class of “locally constructible” (LC) 3-spheres and showed that there are only exponentially many combinatorial types of simplicial LC 3-spheres. Such upper bounds are crucial for the convergence of models for 3D quantum gravity.
We characterize the LC property for d-spheres ("the sphere minus a facet collapses to a (d-2)-complex") and for d-balls. In particular, we link it to the classical notions of collapsibility, shellability and constructibility, and obtain hierarchies of such properties for
simplicial balls and spheres. The main corollaries from this study are: (1.) Not all simplicial 3-spheres are locally constructible. (This solves a problem by Durhuus and Jonsson.)
(2.) There are only exponentially many shellable simplicial 3-spheres with given number of facets. (This answers a question by Kalai.)
(3.) All simplicial constructible 3-balls are collapsible. (This answers a question by Hachimori.)
(4.) Not every collapsible 3-ball collapses onto its boundary minus a facet. (This property appears in papers by Chillingworth and Lickorish.)
Modelling incompressible ideal fluids as a finite collection of
vortex filaments is important in physics (super-fluidity, models for the
onset of turbulence) as well as for numerical algorithms used in computer
graphics for the real time simulation of smoke. Here we introduce
a time-discrete evolution equation for arbitrary closed polygons in 3-space that is a discretisation of the localised induction approximation of
filament motion. This discretisation shares with its continuum limit the
property that it is a completely integrable system. We apply this polygon evolution to significantly improve the numerical algorithms used in computer graphics.
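The continuum equation being discretised is the localised induction approximation, often written (in arclength parametrisation $s$; standard form, notation assumed here) as the binormal or "smoke-ring" flow

$$\partial_t \gamma = \partial_s \gamma \times \partial_s^2 \gamma = \kappa B,$$

where $\kappa$ is the curvature and $B$ the binormal of the filament curve $\gamma$.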
We develop a generic method for constructing a weak static minimum
variance hedge for a wide range of derivatives that may involve optimal exercise features or contingent cash flow streams, to provide a hedge along a
sequence of future hedging dates. The optimal hedge is constructed using
a portfolio of preselected hedge instruments which could be derivatives
with different maturities. The hedge portfolio is weakly static in that
it is initiated at time zero, does not involve intermediate re-balancing,
but hedges may be gradually unwound over time. We study the static
hedging of a convertible bond to demonstrate the method by an example
that involves equity and credit risk. We investigate the robustness of the
hedge performance with respect to parameter and model risk by numerical
experiments.
We introduce an optimization model for the line planning problem in a public transportation system that aims at minimizing operational costs while ensuring a given level of quality of service in terms of available transport capacity. We discuss the computational complexity of the model for tree network topologies and line structures that arise in a real-world application at the Trolebus Integrated System in Quito. Computational results for this system are reported.
The optimization of fare systems in public transit makes it possible to pursue objectives such as the maximization of demand, revenue, profit, or social welfare. We propose a non-linear optimization approach to fare planning that is based on a detailed discrete choice model of user behavior. The approach allows us to analyze different fare structures, optimization objectives, and operational scenarios involving, e.g., subsidies. We use the resulting models to compute optimized fare systems for the city of Potsdam, Germany.
A biological regulatory network can be modeled as a discrete function f that contains all available information on network component interactions. From f we can derive a graph representation of the network structure as well as of the dynamics of the system. In this paper we introduce a method to identify modules of the network that allow us to construct the behavior of f from the dynamics of the modules. Here, it proves useful to distinguish between dynamical and structural modules, and to define network modules combining aspects of both.
As a key concept we establish the notion of symbolic steady state, which basically represents a set of states where the behavior of f is in some sense predictable, and which gives rise to suitable network modules.
We apply the method to a regulatory network involved in T helper cell differentiation.
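To make the notion of steady states concrete, here is a minimal, purely hypothetical Boolean example (the network f and its components are invented for illustration; the T helper cell network studied in the paper is larger, and symbolic steady states generalize the fixed points computed below to partially specified states):

```python
from itertools import product

# Hypothetical 3-component Boolean regulatory function f = (f1, f2, f3).
def f(x):
    x1, x2, x3 = x
    return (int(x2 and not x3),  # component 1: activated by 2, inhibited by 3
            x1,                  # component 2: activated by 1
            int(not x1))         # component 3: inhibited by 1

# Steady states are the fixed points of f, found here by brute force.
steady = [x for x in product((0, 1), repeat=3) if f(x) == x]
print(steady)  # -> [(0, 0, 1), (1, 1, 0)]
```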
The mathematical treatment of planning problems in public transit has made significant advances in the last decade. Among others, the classical problems of vehicle and crew scheduling can nowadays be solved on a routine basis using combinatorial optimization methods. This is not yet the case for problems that pertain to the design of public transit networks, and for the problems of operations control that address the implementation of a schedule in the presence of disturbances. The article gives a sketch of the state and important developments in these areas, and it addresses important challenges. The vision is that mathematical tools of computer aided scheduling (CAS) will soon play a similar role in the design and operation of public transport systems as CAD systems in manufacturing.
In this paper we investigate the fare planning model for public
transport, which consists in designing a system of fares maximizing
the revenue. We discuss a discrete choice model in which passengers
choose between different travel alternatives to express the demand as
a function of fares. Furthermore, we give a computational example for
the city of Potsdam and discuss some theoretical aspects.
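As an illustration of the demand mechanism (a generic logit specification; the concrete choice model and its parameters in the paper may differ), the demand for travel alternative $i$ as a function of the fare vector $p$ could take the form

$$d_i(p) = D\,\frac{e^{-\beta\,c_i(p)}}{\sum_j e^{-\beta\,c_j(p)}}, \qquad r(p) = \sum_i p_i\,d_i(p),$$

where $c_i(p)$ is the generalized cost of alternative $i$, $D$ the total demand, and $r(p)$ the revenue to be maximized.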
This paper introduces the line connectivity
problem, a generalization of the Steiner tree problem and a
special case of the line planning problem. We study its complexity and
give an IP formulation in terms of an exponential number of constraints associated with line cuts. These inequalities
can be separated in polynomial time. We also generalize the Steiner
partition inequalities.
Every day, millions of people are transported by buses, trains, and airplanes
in Germany. Public transit (PT) is of major importance for the quality of
life of individuals as well as the productivity of entire regions. Quality and
efficiency of PT systems depend on the political framework (state-run, market
oriented) and the suitability of the infrastructure (railway tracks, airport
locations), the existing level of service (timetable, flight schedule), the use
of adequate technologies (information, control, and booking systems), and
the best possible deployment of equipment and resources (energy, vehicles,
crews). The decision, planning, and optimization problems arising in this
context are often gigantic and “scream” for mathematical support because of
their complexity.
This article sketches the state and the relevance of mathematics in planning
and operating public transit, describes today’s challenges, and suggests a
number of innovative actions.
The current contribution of mathematics to public transit is — depending
on the transportation mode — of varying depth. Air traffic is already well
supported by mathematics. Bus traffic made significant advances in recent
years, while rail traffic still bears significant opportunities for improvements.
In all areas of public transit, the existing potentials are far from being exhausted.
For some PT problems, such as vehicle and crew scheduling in bus and
air traffic, excellent mathematical tools are not only available, but used in
many places. In other areas, such as rolling stock rostering in rail traffic,
the performance of the existing mathematical algorithms is not yet sufficient.
Some topics are essentially untouched from a mathematical point
of view; e.g., there are (except for air traffic) no network design or fare
planning models of practical relevance. PT infrastructure construction is
essentially devoid of mathematics, even though enormous capital investments
are made in this area. These problems lead to questions that can only be
tackled by engineers, economists, politicians, and mathematicians in a joint
effort.
Among other things, the authors propose to investigate two specific topics, which can be addressed at short notice, are of fundamental importance for the area of traffic planning and beyond, should lead to a significant improvement in the collaboration of all involved parties, and, if successful, will be of real value for companies and customers:
• discrete optimal control: real-time re-planning of traffic systems in case
of disruptions,
• model integration: service design in bus and rail traffic.
Work on these topics in interdisciplinary research projects could be funded
by the German ministry of research and education (BMBF), the German
ministry of economics (BMWi), or the German science foundation (DFG).
Finding the shortest path in a graph is a classical problem of graph theory. Through a talk on this topic given by R. Borndörfer at the Tag der Mathematik 2007, I came into contact with the Konrad-Zuse-Zentrum (ZIB), which, among other things, works on route optimization. One research focus there, within a project on chip verification, is the counting of solutions, which, as we will see, is closely related to the counting of paths.
Using two questions from graph theory, this paper examines different solution methods: How does one determine the shortest path between two nodes in a graph, and how does one find all possible paths?
After an introduction to graph theory and a concretization of the two problems, a solution based on graph algorithms is first presented for each of them. While Dijkstra's algorithm is very well known, I developed my own algorithm for counting paths based on depth-first search.
The second part of the paper introduces the concept of integer programming and the solution approaches for path problems that it provides.
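As a hedged illustration of the second question (a generic depth-first-search path-counting sketch; the graph is invented and the algorithm developed in the paper may differ in its details):

```python
# Count all simple s-t paths in a directed graph by depth-first search.
def count_paths(graph, node, target, visited=frozenset()):
    if node == target:
        return 1
    visited = visited | {node}
    return sum(count_paths(graph, nxt, target, visited)
               for nxt in graph[node] if nxt not in visited)

# Small example graph as an adjacency dict.
G = {"a": ["b", "c"], "b": ["c", "d"], "c": ["d"], "d": []}
print(count_paths(G, "a", "d"))  # -> 3 (a-b-d, a-b-c-d, a-c-d)
```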
"The timetable is the essence of the service offered by any provider of public transport" (Jonathan Tyler, CASPT 2006). Indeed, the timetable has a major impact both on operating costs and on passenger comfort. Most European agglomerations and railways use periodic timetables, in which operation repeats in regular intervals. In contrast, many North and South American municipalities use trip timetables, in which the vehicle trips are scheduled individually subject to frequency constraints. We compare these two strategies with respect to vehicle operation costs. It turns out that for short time horizons, periodic timetabling can be suboptimal; for sufficiently long time horizons, however, periodic timetabling can always be done in an optimal way.
Line planning is an important step in the strategic planning process of a public transportation system. In this paper, we discuss an optimization model for this problem in order to minimize operation costs while guaranteeing a certain level of quality of service, in terms of available transport capacity. We analyze the problem for path and tree network topologies as well as several categories of line operation that are important for the Quito Trolebus system. It turns out that, from a computational complexity worst case point of view, the problem is hard in all but the most simple variants. In practice, however, instances based on real data from the Trolebus System in Quito can be solved quite well, and significant optimization potentials can be demonstrated.
We apply network flow techniques to find good exit selections for evacuees in an emergency evacuation. More precisely, we present two algorithms for computing exit distributions using both classical flows and flows over time which are well known from combinatorial optimization. The performance of these new proposals is compared to a simple shortest path approach and to a best response dynamics approach by using a
cellular automaton model.
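As a hedged sketch of the classical-flow variant (the grid, capacities, and node names below are invented; the paper's models and the flows-over-time variant are more involved), exit distributions can be read off a maximum flow from a super-source over the occupied cells to a super-sink behind the exits:

```python
import networkx as nx

# Hypothetical corridor network: evacuees at cells 'a', 'b'; exits 'e1', 'e2'.
# Arc capacities model how many people per time unit a corridor admits.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=4)   # super-source feeds occupied cells
G.add_edge("s", "b", capacity=3)
G.add_edge("a", "e1", capacity=2)
G.add_edge("a", "e2", capacity=5)
G.add_edge("b", "e1", capacity=3)
G.add_edge("e1", "t", capacity=4)  # exits drain into a super-sink
G.add_edge("e2", "t", capacity=6)

value, flow = nx.maximum_flow(G, "s", "t")
# The flow out of each occupied cell induces its exit assignment.
print(value, flow["a"], flow["b"])
```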
For an analysis of a molecular system from a computational statistical thermodynamics point of view, extensive molecular dynamics simulations are very inefficient. During this procedure, a lot of redundant data is generated. Whereas the algorithms spend most of the computing time on a sampling of configurations within the basins of the potential energy landscape of the molecular system, the important information about the long-time behaviour of the molecules is given by transition regions and barriers between the basins, which are only rarely sampled. Thinking of molecular dynamics trajectories, researchers try to figure out which kind of dynamical model is suitable for an efficient simulation. This article suggests changing the point of view from extensive simulation of molecular dynamics trajectories to the more efficient sampling strategies of the conformation dynamics approach.
For the treatment of equilibrated molecular systems in a heat bath we
propose a transition state theory that is based on conformation dynamics.
In general, a set-based discretization of a Markov operator P does not
preserve the Markov property. In this article, we propose a discretization
method which is based on a Galerkin approach. This discretization
method preserves the Markov property of the operator and can be interpreted
as a decomposition of the state space into (fuzzy) sets. The
conformation-based transition state theory presented here can be seen as
a first step in conformation dynamics towards the computation of essential
dynamical properties of molecular systems without time-consuming
molecular dynamics simulations.
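The following is a minimal numerical sketch of such a Galerkin discretization (the fine-grid transition matrix and the fuzzy membership functions are invented for illustration; the projection formula is one standard way to write a Galerkin coarse graining of a Markov operator):

```python
import numpy as np

# Hypothetical row-stochastic fine-grid transition matrix P.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
pi = np.ones(3) / 3          # stationary distribution (uniform: P is symmetric)

# Fuzzy sets as membership functions: columns of Chi, rows summing to one.
Chi = np.array([[1.0, 0.0],
                [0.5, 0.5],
                [0.0, 1.0]])

# Galerkin projection of P onto the span of the membership functions:
# P_hat = (Chi^T D Chi)^{-1} Chi^T D P Chi, with D = diag(pi).
D = np.diag(pi)
P_hat = np.linalg.solve(Chi.T @ D @ Chi, Chi.T @ D @ P @ Chi)
print(P_hat, P_hat.sum(axis=1))  # rows sum to one: Markov property preserved
```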
This article deals with an efficient sampling of the stationary distribution
of dynamical systems in the presence of metastabilities. For such
systems, standard sampling schemes suffer from trapping problems and
critical slowing down. Starting multiple trajectories in different regions of
the sampling space is a promising way out. The different samplings represent
the stationary distribution locally very well, but are still far away
from ergodicity or from the global stationary distribution. We will show
how these samplings can be joined together in order to get one global
sampling of the stationary distribution.
Biochemical interactions are determined by the 3D-structure of the involved components; thus the identification of conformations is a key for many applications in rational drug design. ConFlow is a new multilevel approach to conformational analysis with a main focus on completeness in the investigation of conformational space.
In contrast to known conformational analysis methods, the starting point for the design is a space-based description of conformational areas. A tight integration of sampling and analysis leads to an identification of conformational areas simultaneously during sampling. An incremental decomposition of the high-dimensional conformational space is used to guide the analysis. A new concept for the description of conformations and their path connected components, based on convex hulls and hypercubes, is developed. The first results of the ConFlow application constitute a 'proof of concept' and are furthermore highly encouraging. In comparison to conventional industrial applications, ConFlow achieves higher accuracy and a specified degree of completeness with comparable effort.
In order to compute the thermodynamic weights of the different metastable conformations
of a molecule, we want to approximate the molecule’s Boltzmann distribution in a reasonable
time. This is an essential issue in computational drug design. The energy landscape of active
biomolecules is generally very rough with a lot of high barriers and low regions. Many of the
algorithms that perform such samplings (e.g. the hybrid Monte Carlo method) have difficulties
with such landscapes. They are trapped in low-energy regions for a very long time and cannot
overcome high barriers. Moving from one low-energy region to another is a very rare event. For
these reasons, the distribution of the generated sampling points converges very slowly to the thermodynamically correct distribution of the molecule.
The idea of ConfJump is to use a priori knowledge of the localization of low-energy regions
to enhance the sampling with artificial jumps between these low-energy regions. The artificial
jumps are combined with the hybrid Monte Carlo method. This allows the computation of
some dynamical properties of the molecule. In ConfJump, the detailed balance condition is
satisfied and the mathematically correct molecular distribution is sampled.
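As an illustration of the jump idea only (a toy double-well potential with plain Metropolis moves instead of the hybrid Monte Carlo method actually used in ConfJump; all numbers are invented), note that a symmetric jump proposal between known low-energy regions leaves the acceptance rule, and hence detailed balance, unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 4.0
jump = 2.0                    # distance between the two known minima at -1 and +1

def energy(x):                # toy double-well potential
    return (x * x - 1.0) ** 2

x, samples = -1.0, []
for _ in range(10_000):
    if rng.random() < 0.1:    # artificial jump between low-energy regions
        y = x + jump * rng.choice([-1.0, 1.0])
    else:                     # ordinary local move
        y = x + 0.2 * rng.normal()
    # Both proposals are symmetric, so the Metropolis rule below
    # satisfies detailed balance w.r.t. exp(-beta * energy).
    if rng.random() < np.exp(-beta * (energy(y) - energy(x))):
        x = y
    samples.append(x)
```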
The identification of metastable conformations of molecules plays an
important role in computational drug design. One main difficulty is the
fact that the underlying dynamic processes take place in high dimensional
spaces. Although the restriction of degrees of freedom to a few dihedral
angles significantly reduces the complexity of the problem, the existing
algorithms are time-consuming. They are mainly based on the approximation
of a transfer operator by an extensive sampling of states according
to the Boltzmann distribution and short-time Hamiltonian dynamics simulations.
We present a method which can identify metastable conformations
without sampling the complete distribution. Our algorithm is based
on local transition rates and uses only pointwise information about the
potential energy surface. In order to apply the cluster algorithm PCCA+,
we compute a few eigenvectors of the rate matrix by the Jacobi-Davidson
method. Interpolation techniques are applied to approximate the thermodynamical
weights of the clusters. The concluding example illustrates
our approach for epigallocatechine, a molecule which can be described by
seven dihedral angles.
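A toy version of the spectral step (the 3-state rate matrix is invented; the paper uses the Jacobi-Davidson method because realistic rate matrices are large and sparse):

```python
import numpy as np

# Hypothetical rate matrix L (row sums zero): two metastable states
# coupled only weakly through the intermediate state.
L = np.array([[-0.01,  0.01,  0.00],
              [ 1.00, -2.00,  1.00],
              [ 0.00,  0.01, -0.01]])

vals, vecs = np.linalg.eig(L)
order = np.argsort(-vals.real)   # eigenvalues closest to zero first
print(vals[order])               # a cluster of eigenvalues near 0 signals metastability
# PCCA+ would transform vecs[:, order[:2]] into membership vectors of two clusters.
```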
We give an introduction into the fascinating area of flows over time - also called "dynamic flows" in the literature. Starting from the early work of Ford and Fulkerson on maximum flows over time, we cover many exciting results that have been obtained over the last fifty years. One purpose of this paper is to serve as a possible basis for teaching network flows over time in an advanced course on combinatorial optimization.
This paper discusses how to build a solver for mixed integer quadratically constrained programs (MIQCPs) by extending a framework for constraint integer programming (CIP). The advantage of this approach is that we can utilize the full power of advanced MIP and CP technologies. In particular, this addresses the linear relaxation and the discrete components of the problem. For relaxation, we use an outer approximation generated by linearization of convex constraints and linear underestimation of nonconvex constraints. Further, we give an overview of the reformulation, separation, and propagation techniques that are used to handle the quadratic constraints efficiently.
We implemented these methods in the branch-cut-and-price framework SCIP. Computational experiments indicate the potential of the approach.
Pseudo-Boolean problems lie on the border between satisfiability problems, constraint programming, and integer programming. In particular, nonlinear constraints in pseudo-Boolean optimization can be handled by methods arising in these different fields: One can either linearize them and work on a linear programming relaxation or one can treat them directly by propagation. In this paper, we investigate the individual strengths of these approaches and compare their computational performance. Furthermore, we integrate these techniques into a branch-and-cut-and-propagate framework, resulting in an efficient nonlinear pseudo-Boolean solver.
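To fix ideas, the standard linearization of a nonlinear (product) term replaces $z = x_1 x_2$ with binary $x_1, x_2$ by the linear constraints

$$z \le x_1, \qquad z \le x_2, \qquad z \ge x_1 + x_2 - 1, \qquad 0 \le z \le 1,$$

whereas the propagation-based alternative mentioned above reasons on the product directly.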
Hybrid Branching
(2009)
State-of-the-art solvers for Constraint Satisfaction Problems (CSP), Mixed Integer Programs (MIP), and
satisfiability problems (SAT) are usually based on a branch-and-bound algorithm.
The question of how to split a problem into subproblems (\emph{branching}) is at the core of any branch-and-bound algorithm.
Branching on individual variables is very common in CSP, MIP, and SAT. The rules for selecting the branching variable, however, differ significantly. In this paper, we present hybrid branching, which combines selection rules from all three fields.
This article introduces constraint integer programming (CIP), which is a novel way to combine constraint programming (CP) and mixed integer programming (MIP) methodologies. CIP is a generalization of MIP that supports the notion of general constraints as in CP. This approach is supported by the CIP framework SCIP, which also integrates techniques for solving satisfiability problems. SCIP is available in source code and free for noncommercial use.
We demonstrate the usefulness of CIP on three tasks. First, we apply the constraint integer programming approach to pure mixed integer programs. Computational experiments show that SCIP is almost competitive with current state-of-the-art commercial MIP solvers. Second, we demonstrate how to use CIP techniques to compute the number of optimal solutions of integer programs. Third, we employ the CIP framework to solve chip design verification problems, which involve some highly nonlinear constraint types that are very hard to handle by pure MIP solvers. The CIP approach is very effective here: it can apply the full sophisticated MIP machinery to the linear part of the problem, while dealing with the nonlinear constraints by employing constraint programming techniques.
In the first part of this article, we have shown how time-dependent optimal control for partial
differential equations can be realized in a modern high-level modeling and simulation package. In this second part we extend our approach to (state) constrained problems. "Pure" state constraints in a function space
setting lead to non-regular Lagrange multipliers (if they exist), i.e. the Lagrange multipliers are in general Borel
measures. This will be overcome by different regularization techniques.
To implement inequality constraints, active set methods and interior point methods (or barrier methods) are in widespread use. We show how these techniques can be realized in the modeling and simulation package Comsol Multiphysics.
In contrast to the first part, only the one-shot approach based on space-time elements is considered. We implemented a projection method based on active sets as well as a barrier method, and we compare these methods with a specialized PDE optimization program and with a program that optimizes the discrete version of the given problem.
In this paper we present a strategy to solve parabolic optimal control problems
using available specialized elliptic PDE solvers. We aim at an indirect solution
approach, i.e. developing optimality conditions in function spaces that are then
discretized and solved. Classes of problems where optimality conditions can be derived
as coupled systems of parabolic partial differential equations are considered.
We consider a simultaneous space-time discretization. We verify that for our model
problems the parabolic forward-backward system of PDEs can equivalently be expressed
by a single elliptic boundary value problem in the space-time domain. This
fact has been used as a motivation for space-time-multigrid solution approaches,
which may also be an option in our context.
The theoretical basis developed for the example problems then allows us to apply specialized elliptic PDE solvers to the optimality system without much implementation effort. Numerical experiments for some example problems are conducted and underline the applicability of this approach.
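For a model problem of the kind treated here (a schematic example with assumed notation: distributed control $q$, state $u$, target $u_d$, regularization $\alpha > 0$), the first-order optimality conditions form exactly such a coupled forward-backward parabolic system:

$$\partial_t u - \Delta u = q, \qquad -\partial_t p - \Delta p = u - u_d, \qquad \alpha q + p = 0,$$

with appropriate initial, terminal, and boundary conditions; eliminating $q = -p/\alpha$ couples the two parabolic equations into a single elliptic system in the space-time domain, as described above.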
Pseudo-Boolean problems generalize SAT problems by allowing linear constraints and a linear objective function. Different solvers, mainly having their roots in the SAT domain, have been proposed and compared, for instance, in Pseudo-Boolean evaluations. One can also formulate Pseudo-Boolean models as integer programming models. That is, Pseudo-Boolean problems lie on the border between the SAT domain and the integer programming field.
In this paper, we approach Pseudo-Boolean problems from the integer programming side. We introduce the framework SCIP that implements constraint integer programming techniques. It integrates methods from constraint programming, integer programming, and SAT-solving: the solution of linear programming relaxations, propagation of linear as well as nonlinear constraints, and conflict analysis. We argue that this approach is suitable for Pseudo-Boolean instances containing general linear constraints, while it is less efficient for pure SAT problems. We present extensive computational experiments on the test set used for the Pseudo-Boolean evaluation 2007. We show that our approach is very efficient for optimization instances and competitive for feasibility problems. For the nonlinear parts, we also investigate the influence of linear programming relaxations and propagation methods on the performance. It turns out that both techniques are helpful for obtaining an efficient solution method.