Primal heuristics are an important component of state-of-the-art codes for mixed integer programming. In this paper, we focus on primal heuristics that only employ computationally inexpensive procedures such as rounding and logical deductions (propagation). We give an overview of eight different approaches. To assess the impact of these primal heuristics on the ability to find feasible solutions, in particular early during search, we introduce a new performance measure, the primal integral. Computational experiments evaluate this and other measures on MIPLIB 2010 benchmark instances.
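As an illustration of the measure, here is a minimal Python sketch of how a primal integral can be computed from the incumbent history of a solver run. The function name and interface are our own, and the gap normalization follows the standard primal-gap convention, which may differ in detail from the paper's exact definition.

```python
def primal_integral(events, opt, t_end):
    """Integrate the primal gap over [0, t_end] for a minimization run.

    events: list of (time, incumbent_objective) pairs, sorted by time.
    opt:    optimal (or best known) objective value.
    Smaller values indicate that good solutions were found early.
    """
    def gap(incumbent):
        if incumbent is None:
            return 1.0                     # no solution yet: full gap
        if incumbent == opt == 0.0:
            return 0.0
        if incumbent * opt < 0:
            return 1.0                     # opposite signs: full gap
        return abs(incumbent - opt) / max(abs(incumbent), abs(opt))

    integral, t_prev, best = 0.0, 0.0, None
    for t, obj in events:
        integral += gap(best) * (t - t_prev)
        t_prev = t
        best = obj if best is None else min(best, obj)
    integral += gap(best) * (t_end - t_prev)
    return integral
```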
Some optimal control problems for linear and nonlinear ordinary differential equations related to the optimal switching between different magnetic fields are considered. The main aim is to steer an initial electrical current to a desired terminal current in the shortest time by means of a controllable voltage, and to hold it there afterwards. Necessary optimality conditions are derived by Pontryagin's principle and a Lagrange technique. In the case of a linear system, the principal structure of time-optimal controls is discussed. The associated optimality systems are solved by a one-shot strategy using a multigrid software package. Various numerical examples are discussed.
Recently, Khuller, Moss and Naor presented a greedy algorithm for the budgeted maximum coverage problem. In this note, we observe that this algorithm also approximates a special case of set-union knapsack problem within a constant factor. In the special case, an element is a member of less than a constant number of subsets. This guarantee naturally extends to densest k-subgraph problem on graphs of bounded degree.
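For illustration, a minimal Python sketch of the cost-effectiveness greedy at the core of such algorithms. The published algorithm additionally compares against (or enumerates) small candidate solutions to obtain its constant-factor guarantee; this sketch shows only the greedy selection rule.

```python
def greedy_budgeted_coverage(sets, costs, budget):
    """Greedy for budgeted maximum coverage: repeatedly pick the set with
    the best ratio of newly covered elements to cost, while it fits.

    sets:  dict mapping set id -> frozenset of elements
    costs: dict mapping set id -> positive cost
    Returns the chosen set ids and the covered elements.
    """
    covered, chosen, spent = set(), [], 0.0
    remaining = dict(sets)
    while remaining:
        best, best_ratio = None, 0.0
        for s, elems in remaining.items():
            new = len(elems - covered)
            if new > 0 and spent + costs[s] <= budget:
                ratio = new / costs[s]
                if ratio > best_ratio:
                    best, best_ratio = s, ratio
        if best is None:
            break                      # nothing affordable adds coverage
        covered |= remaining.pop(best)
        chosen.append(best)
        spent += costs[best]
    return chosen, covered
```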
Primal-dual linear Monte Carlo algorithm for multiple stopping - An application to flexible caps
(2012)
In this paper we consider the valuation of Bermudan callable derivatives with multiple exercise rights. We present in this context a new primal-dual linear Monte Carlo algorithm that allows for efficient simulation of lower and upper price bounds without using nested simulations (hence the terminology). The algorithm is essentially an extension of a primal-dual Monte Carlo algorithm for standard Bermudan options proposed in Schoenmakers et al. (2011), to the case of multiple exercise rights. In particular, the algorithm constructs upwardly a system of dual martingales to be plugged into the dual representation of Schoenmakers (2010). At each level the respective martingale is constructed via a backward regression procedure starting at the last exercise date. The thus constructed martingales are finally used to compute an upper price bound. At the same time, the algorithm also provides approximate continuation functions which may be used to construct a lower price bound. The algorithm is applied to the pricing of flexible caps in a Hull and White (1990) model setup. The simple model choice allows for comparison of the computed price bounds with the exact price which is obtained by means of a trinomial tree implementation. As a result, we obtain tight price bounds for the considered application. Moreover, the algorithm is generically designed for multi-dimensional problems and is tractable to implement.
Persistence of rogue waves in extended nonlinear Schrödinger equations: Integrable Sasa-Satsuma case
(2012)
We present the lowest order rogue wave solution of the Sasa-Satsuma equation (SSE), which is one of the integrable extensions of the nonlinear Schrödinger equation (NLSE). In contrast to the Peregrine solution of the NLSE, it is significantly more involved and contains polynomials of fourth order rather than second order in the corresponding expressions. The Peregrine solution of the NLSE is recovered as the correct limiting case when the extension parameter of the SSE is reduced to zero.
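For orientation, the Peregrine solution referred to above can be written, in one common normalization of the NLSE $i\psi_t + \tfrac12\psi_{xx} + |\psi|^2\psi = 0$, as
\[
\psi(x,t) \;=\; \left[\, 1 - \frac{4\,(1+2it)}{1+4x^2+4t^2} \,\right] e^{it},
\]
which involves only second-order polynomials in $x$ and $t$; the SSE rogue wave replaces these quadratics by fourth-order polynomials.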
Cubature methods, a powerful alternative to Monte Carlo due to Kusuoka [Adv. Math. Econ. 6, 69–83, 2004] and Lyons–Victoir [Proc. R. Soc. Lond. Ser. A 460, 169–198, 2004], involve the solution to numerous auxiliary ordinary differential equations. With focus on the Ninomiya-Victoir algorithm [Appl. Math. Fin. 15, 107–121, 2008], which corresponds to a concrete level 5 cubature method, we study some parametric diffusion models motivated from financial applications, and exhibit structural conditions under which all involved ODEs can be solved explicitly and efficiently. We then enlarge the class of models for which this technique applies, by introducing a (model-dependent) variation of the Ninomiya-Victoir method. Our method remains easy to implement; numerical examples illustrate the savings in computation time.
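To make the role of the auxiliary ODEs concrete: for an SDE in Stratonovich form $dX_t = V_0(X_t)\,dt + \sum_{i=1}^d V_i(X_t)\circ dW^i_t$, one step of the Ninomiya-Victoir scheme over a time step $\Delta$ can be written (up to conventions) as
\[
X_{(k+1)\Delta} \;=\;
\begin{cases}
e^{\frac{\Delta}{2}V_0}\, e^{\sqrt{\Delta}\,Z_d V_d} \cdots e^{\sqrt{\Delta}\,Z_1 V_1}\, e^{\frac{\Delta}{2}V_0}\, X_{k\Delta}, & \Lambda = +1,\\
e^{\frac{\Delta}{2}V_0}\, e^{\sqrt{\Delta}\,Z_1 V_1} \cdots e^{\sqrt{\Delta}\,Z_d V_d}\, e^{\frac{\Delta}{2}V_0}\, X_{k\Delta}, & \Lambda = -1,
\end{cases}
\]
where $e^{tV}x$ denotes the time-$t$ flow of the ODE $\dot y = V(y)$, the $Z_i$ are i.i.d. standard normals, and $\Lambda$ is an independent fair coin flip. The structural conditions studied in the paper concern precisely when these flows $e^{tV_i}$ admit explicit closed-form expressions.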
We introduce an algorithm for
diffusion weighted magnetic resonance imaging data enhancement based on structural adaptive smoothing in both space and diffusion direction.
The method, called POAS, does not refer to a specific model for the data, like the diffusion tensor or higher order models.
It works by embedding the measurement space into a space with defined metric and group operations, in this case the Lie group of three-dimensional Euclidean motion SE(3).
Subsequently, pairwise comparisons of the values of the diffusion
weighted signal are used for adaptation.
The position-orientation adaptive smoothing preserves the edges of the observed fine and anisotropic structures.
The POAS-algorithm is designed to reduce noise directly in the diffusion weighted images and consequently also to reduce bias and
variability of quantities derived from the data for specific models.
We evaluate the algorithm on simulated and experimental data and demonstrate that it can be used to reduce the number of applied diffusion gradients and
hence acquisition time while achieving similar quality of data, or to improve the quality of data acquired in a clinically feasible scan time setting.
In this article we propose a novel approach to reduce the computational complexity of the dual method for pricing American options. We consider a sequence of martingales that converges to a given target martingale and decompose the original dual representation into a sum of representations that correspond to different levels of approximation to the target martingale. By next replacing in each representation true conditional expectations with their Monte Carlo estimates, we arrive at what one may call a multilevel dual Monte Carlo algorithm. The analysis of this algorithm reveals that the computational complexity of getting the corresponding target upper bound, due to the target martingale, can be significantly reduced. In particular, it turns out that using our new approach, we may construct a multilevel version of the well-known nested Monte Carlo algorithm of Andersen and Broadie (2004) that is, regarding complexity, virtually equivalent to a non-nested algorithm. The performance of this multilevel algorithm is illustrated by a numerical example.
In this paper, we study the dual representation for generalized multiple stopping problems,
hence the pricing problem of general multiple exercise options. We derive a dual representation which allows for cashflows which are subject to volume constraints modeled by
integer valued adapted processes and refraction periods modeled by stopping times. As
such, this extends the works by Schoenmakers (2010), Bender (2011a), Bender (2011b),
Aleksandrov and Hambly (2010), and Meinshausen and Hambly (2004) on multiple exercise
options, which either take into consideration a refraction period or volume constraints, but
not both simultaneously. We also allow more flexible cashflow structures than the additive
structure in the above references. For example some exponential utility problems are covered
by our setting. We supplement the theoretical results with an explicit Monte Carlo algorithm
for constructing confidence intervals for the price of multiple exercise options and exemplify
it by a numerical study on the pricing of a swing option in an electricity market.
Optimal Identification of Semi-Rigid Domains in Macromolecules from Molecular Dynamics Simulation
(2012)
Biological function relies on the fact that biomolecules can switch between different conformations and aggregation states.
Such transitions involve a rearrangement of parts of the biomolecules involved that act as dynamic domains. The reliable
identification of such domains is thus a key problem in biophysics. In this work we present a method to identify semi-rigid
domains based on dynamical data that can be obtained from molecular dynamics simulations or experiments. To this end
the average inter-atomic distance deviations are computed. The resulting matrix is then clustered by solving a constrained quadratic optimization problem. The reliability and performance of the method are demonstrated for two artificial peptides.
Furthermore we correlate the mechanical properties with biological malfunction in three variants of amyloidogenic
transthyretin protein, where the method reveals that a pathological mutation destabilizes the natural dimer structure of the
protein. Finally the method is used to identify functional domains of the GroEL-GroES chaperone, thus illustrating the
efficiency of the method for large biomolecular machines.
RENS – the optimal rounding
(2012)
This article introduces RENS, the relaxation enforced neighborhood search, a large neighborhood search algorithm for mixed integer nonlinear programming (MINLP) that uses a sub-MINLP to explore the set of feasible roundings of an optimal solution x' of a linear or nonlinear relaxation. The sub-MINLP is constructed by fixing integer variables x_j with x'_j in Z and bounding the remaining integer variables to x_j in {floor(x'_j), ceil(x'_j)}. We describe two different applications of RENS: as a standalone algorithm to compute an optimal rounding of the given starting solution and as a primal heuristic inside a complete MINLP solver.
We use the former to compare different kinds of relaxations and the impact of cutting planes on the roundability of the corresponding optimal solutions. We further utilize RENS to analyze the performance of three rounding heuristics implemented in the branch-cut-and-price framework SCIP. Finally, we study the impact of RENS when it is applied as a primal heuristic inside SCIP.
All experiments were performed on three publicly available test sets of mixed integer linear programs (MIPs), mixed integer quadratically constrained programs (MIQCPs), and MINLPs, using solely software which is available in source code.
It turns out that for these problem classes 60% to 70% of the instances have roundable relaxation optima and that the success rate of RENS does not depend on the percentage of fractional variables. Last but not least, RENS applied as a primal heuristic complements existing root node heuristics in SCIP nicely and improves the overall performance.
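A minimal Python sketch of the fixing-and-bounding step that defines the RENS sub-problem. The interface (a map from variables to relaxation values, returning bound pairs) is hypothetical; actual implementations set these bounds inside a solver such as SCIP.

```python
import math

def rens_domains(relax_values, integer_vars, eps=1e-6):
    """Build the RENS sub-problem domains from a relaxation solution x'.

    relax_values: dict var -> value of x' in the LP/NLP relaxation
    integer_vars: iterable of the integer variables
    Integral variables are fixed; fractional ones keep only their two
    roundings {floor(x'_j), ceil(x'_j)}.
    """
    domains = {}
    for j in integer_vars:
        x = relax_values[j]
        if abs(x - round(x)) <= eps:     # x'_j integral: fix the variable
            domains[j] = (round(x), round(x))
        else:                            # x'_j fractional: allow both roundings
            domains[j] = (math.floor(x), math.ceil(x))
    return domains
```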
Learning during search allows solvers for discrete optimization problems to remember parts of the search that they have already performed and avoid revisiting redundant parts. Learning approaches pioneered by the SAT and CP communities have been successfully incorporated into the SCIP constraint integer programming platform. In this paper we show that performing a heuristic constraint programming search during root node processing of a binary program can rapidly learn useful nogoods, bound changes, primal solutions, and branching statistics that improve the remaining IP search.
We present Undercover, a primal heuristic for nonconvex mixed-integer nonlinear programming (MINLP) that explores a mixed-integer linear subproblem (sub-MIP) of a given MINLP. We solve a vertex covering problem to identify a minimal set of variables that need to be fixed in order to linearize each constraint, a so-called cover. Subsequently, these variables are fixed to values obtained from a reference point, e.g., an optimal solution of a linear relaxation. We apply domain propagation and conflict analysis to try to avoid infeasibilities and learn from them, respectively. Each feasible solution of the sub-MIP corresponds to a feasible solution of the original problem.
We present computational results on a test set of mixed-integer quadratically constrained programs (MIQCPs) and general MINLPs from MINLPLib. It turns out that the majority of these instances allow for small covers. Although general in nature, the heuristic appears most promising for MIQCPs, and it complements existing root node heuristics in different state-of-the-art solvers nicely.
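To illustrate the covering idea on the bilinear case: fixing either factor of a product term linearizes it, so a cover can be read off the co-occurrence structure of the nonlinear terms. The Python sketch below uses a greedy rule purely for illustration; the heuristic described above solves the covering problem exactly as an integer program.

```python
def greedy_linearizing_cover(bilinear_terms, must_fix):
    """Greedily pick variables to fix so every nonlinear term is linearized.

    bilinear_terms: iterable of pairs (i, j) for products x_i * x_j;
                    fixing either variable linearizes the term.
    must_fix:       variables in other nonlinear terms (x_i**2, exp(x_i), ...)
                    that have to be fixed outright.
    Returns a set of variables whose fixing yields a linear sub-problem.
    """
    cover = set(must_fix)
    open_terms = [frozenset(t) for t in bilinear_terms
                  if not (set(t) & cover)]
    while open_terms:
        # pick the variable hitting the most still-uncovered terms
        counts = {}
        for term in open_terms:
            for v in term:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)
        cover.add(best)
        open_terms = [t for t in open_terms if best not in t]
    return cover
```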
We present Undercover, a primal heuristic for mixed-integer nonlinear programming (MINLP). The heuristic constructs a mixed-integer linear subproblem (sub-MIP) of a given MINLP by fixing a subset of the variables. We solve a set covering problem to identify a minimal set of variables which need to be fixed in order to linearise each constraint. Subsequently, these variables are fixed to approximate values, e.g. obtained from a linear outer approximation. The resulting sub-MIP is solved by a mixed-integer linear programming solver. Each feasible solution of the sub-MIP corresponds to a feasible solution of the original problem. Although general in nature, the heuristic seems most promising for mixed-integer quadratically constrained programmes (MIQCPs). We present computational results on a general test set of MIQCPs selected from the MINLPLib.
We provide a computational study of the performance of a state-of-the-art solver for nonconvex mixed-integer quadratically constrained programs (MIQCPs). Since successful general-purpose solvers for large problem classes necessarily comprise a variety of algorithmic techniques, we focus especially on the impact of the individual solver components. The solver SCIP used for the experiments implements a branch-and-cut algorithm based on linear outer approximation to solve MIQCPs to global optimality. Our analysis is based on a set of 86 publicly available test instances.
We propose a hybrid approach for solving the resource-constrained project scheduling problem, an extremely hard-to-solve combinatorial optimization problem of practical relevance. Jobs have to be scheduled on (renewable) resources subject to precedence constraints such that the resource capacities are never exceeded and the latest completion time of all jobs is minimized. The problem has challenged researchers from different communities, such as integer programming (IP), constraint programming (CP), and satisfiability testing (SAT). Still, there are instances with 60 jobs which have not been solved for many years. The currently best known approach, lazyFD, is a hybrid between CP and SAT techniques. In this paper we propose an even stronger hybridization by integrating all three areas, IP, CP, and SAT, into a single branch-and-bound scheme. We show that lower bounds from the linear relaxation of the IP formulation and conflict analysis are key ingredients for pruning the search tree. First computational experiments show very promising results. For five instances of the well-known PSPLIB we report an improvement of lower bounds. Our implementation is generic, thus it can potentially be applied to similar problems as well.
Large neighborhood search (LNS) heuristics are an important component of modern branch-and-cut algorithms for solving mixed-integer linear programs (MIPs). Most of these LNS heuristics use the LP relaxation as the basis for their search, which is a reasonable choice in case of MIPs. However, for more general problem classes, the LP relaxation alone may not contain enough information about the original problem to find feasible solutions with these heuristics, e.g., if the problem is nonlinear or not all constraints are present in the current relaxation. In this paper, we discuss a generic way to extend LNS heuristics that have been developed for MIP to constraint integer programming (CIP), which is a generalization of MIP in the direction of constraint programming (CP). We present computational results of LNS heuristics for three problem classes: mixed-integer quadratically constrained programs, nonlinear pseudo-Boolean optimization instances, and resource-constrained project scheduling problems. To this end, we have implemented extended versions of the following LNS heuristics in the constraint integer programming framework SCIP: Local Branching, RINS, RENS, Crossover, and DINS. Our results indicate that a generic generalization of LNS heuristics to CIP considerably improves the success rate of these heuristics.
Energetic reasoning is one of the most powerful propagation algorithms in cumulative scheduling. In practice, however, it is not commonly used because it has a high running time and its success highly depends on the tightness of the variable bounds. In order to speed up energetic reasoning, we provide an easy-to-check necessary condition for energetic reasoning to detect infeasibilities. We present an implementation of energetic reasoning that employs this condition and that can be parametrically adjusted to handle the trade-off between solving time and propagation overhead. Computational results on instances from the PSPLIB are provided. These results show that using this condition decreases the running time by more than a half, although more search nodes need to be explored.
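For reference, in common notation (job $j$ with processing time $p_j$, demand $c_j$, earliest start $\mathrm{est}_j$, latest completion $\mathrm{lct}_j$, and resource capacity $C$), the energy that job $j$ must consume within an interval $[t_1,t_2]$ is usually written as
\[
e_j(t_1,t_2) \;=\; c_j \cdot \max\Bigl(0,\; \min\bigl(p_j,\; t_2-t_1,\; \mathrm{est}_j+p_j-t_1,\; t_2-\mathrm{lct}_j+p_j\bigr)\Bigr),
\]
and energetic reasoning reports an infeasibility for $[t_1,t_2]$ whenever $\sum_j e_j(t_1,t_2) > C\,(t_2-t_1)$. The necessary condition proposed above cheaply filters the candidate intervals before this test is run.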
In this paper a bottom-up approach to the automatic simplification of a railway network is presented. Starting from a very detailed, microscopic level, as it is used in railway simulation, the network is transformed by an algorithm to a less detailed level (macroscopic network) that is sufficient for long-term planning and optimization. In addition, running and headway times are rounded to a pre-chosen time discretization by a special cumulative method, which we present and analyse in this paper. After the transformation we fill the network with given train requests to compute an optimal slot allocation. The optimized schedule is then re-transformed to the microscopic level and can be simulated without any conflicts occurring between the slots. The algorithm is used to transform the network of the very dense Simplon corridor between Switzerland and Italy. With our aggregation it is possible for the first time to generate a profit-maximal and conflict-free timetable for the corridor across a whole day in a single simultaneous optimization run.
In the course of the takeover of six lines of Havelbus Verkehrsgesellschaft mbH by ViP Verkehr in Potsdam GmbH, the need arose in 2009 to develop a new line and frequency plan for the year 2010. The Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) is developing a method for mathematical line optimization within a project of the DFG research center Matheon. This tool was employed in the optimization of the ViP line plan 2010 in an accompanying study, in order to explore alternatives under different planning requirements and objectives. The article describes an evaluation of the results with the traffic analysis system Visum of PTV AG. The evaluations confirm that mathematical optimization makes possible a further reduction of travel time by 1%, a perceived travel time reduction of 6%, 10% less in-vehicle travel time, and a simultaneous cost reduction of 5%.
The hypergraph assignment problem (HAP) is the generalization of assignments
from directed graphs to directed hypergraphs. It serves, in particular,
as a universal tool to model several train composition rules in vehicle rotation
planning for long distance passenger railways. We prove that even for problems
with a small hyperarc size and hypergraphs with a special partitioned structure
the HAP is NP-hard and APX-hard. Further, we present an extended integer
linear programming formulation which implies, e.g., all clique inequalities.
Vehicle rotation planning is a fundamental problem in rail transport. It decides how the railcars, locomotives, and carriages are operated in order to implement the trips of the timetable. One important planning requirement is operational regularity, i.e., using the rolling stock in the same way on every day of operation. We propose to take regularity into account by modeling the vehicle rotation planning problem as a minimum cost hyperassignment problem (HAP). Hyperassignments are generalizations of assignments from directed graphs to directed hypergraphs. Finding a minimum cost hyperassignment is NP-hard. Most instances arising from regular vehicle rotation planning, however, can be solved well in practice. We show that, in particular, clique inequalities strengthen the canonical LP relaxation substantially.
Rapid Branching
(2012)
We propose rapid branching (RB) as a general branch-and-bound heuristic for solving large scale optimization problems in traffic and transport. The key idea is to combine a special branching rule and a greedy node selection strategy in order to produce solutions of controlled quality rapidly and efficiently. We report on three successful applications of the method for integrated vehicle and crew scheduling, railway track allocation, and railway vehicle rotation planning.
This paper provides a generic formulation for rolling stock planning problems in the context of intercity passenger traffic. The main contributions are a graph theoretical model and a mixed-integer programming formulation that integrate all main requirements of the considered vehicle rotation planning problem (VRPP). We show that it is possible to solve this model for real-world instances provided by our industrial partner DB Fernverkehr AG using modern algorithms and computers.
We propose a model for the integrated optimization of vehicle rotations and vehicle compositions in long distance railway passenger transport. The main contribution of the paper is a hypergraph model that is able to handle the challenging technical requirements as well as very general stipulations with respect to the "regularity" of a schedule. The hypergraph model directly generalizes network flow models, replacing arcs with hyperarcs. Although NP-hard in general, the model is computationally well-behaved in practice. High quality solutions can be produced in reasonable time using high performance Integer Programming techniques, in particular, column generation and rapid branching. We show that, in this way, large-scale real world instances of our cooperation partner DB Fernverkehr can be solved.
Today the railway timetabling process and the track allocation are among the most challenging problems a railway company has to solve. In particular, due to the deregulation of the transport market in recent years, several suppliers of railway traffic have entered the market in Europe. This leads to more potential conflicts between trains caused by an increasing demand for train paths. Planning and operating railway transportation systems is extremely hard due to the combinatorial complexity of the underlying discrete optimization problems, the technical intricacies, and the immense size of the problem instances. In order to make best use of the infrastructure and to ensure economic operation, efficient planning of the railway operation is indispensable. Mathematical optimization models and algorithms can help to automatize and tackle these challenges. Our contribution in this paper is to present a renewed planning process, motivated by the liberalization in Europe, and an associated concept for track allocation that consists of three important parts: simulation, aggregation, and optimization. Furthermore, we present results of our general framework for real world data.
The track allocation problem, also known as the train routing problem or train timetabling problem, is to find a conflict-free set of train routes of maximum value in a railway network. Although it can be modeled as a standard path packing problem, instances of sizes relevant for real-world railway applications could not be solved up to now. We propose a rapid branching column generation approach that integrates the solution of the LP relaxation of a path coupling formulation of the problem with a special rounding heuristic. The approach is based on and exploits special properties of the bundle method for the approximate minimization of convex piecewise linear functions. Computational results for difficult instances of the benchmark library TTPLIB are reported.
In this paper we investigate two different recoverable robust models to deal with cost uncertainties in a shortest path problem. Recoverable robustness extends the classical concept of robustness to deal with uncertainties by incorporating limited recovery actions after the
full data are revealed. Our first model focuses on the case where the recovery actions are quite restricted: after a simple path is fixed in the first stage, in the second stage, after all data are revealed, any path containing at most k new arcs may be chosen.
Thus, the parameter k can be interpreted as a mediator between
robust optimization (no changes allowed) and optimization
on the fly (an arbitrary solution can be chosen). Considering three
classical scenario sets, which model uncertainties in the cost function,
we show that this new problem is strongly NP-hard in all
these cases and is not approximable, unless P=NP.
This is in contrast to the robust shortest path problem, where, for
example, an optimal solution can be computed efficiently for interval
and Gamma-scenarios. For series-parallel graphs and interval scenarios,
we present a polynomial time algorithm for this recoverable robust
setting.
In our second model the recovery set, i.e., the set of paths selectable
in the second stage is not limited, but deviating from the previous
choice comes at extra cost. Thus, a path chosen in the first stage
produces renting costs modeled as an alpha-fraction of the scenario
cost. For an arc taken in the second stage the remaining cost needs
to be paid in addition to some extra inflation cost modeled by a beta-fraction
of the scenario cost, if the arc was not reserved beforehand. The
complexity status of this problem is similar to the robust case. Yet,
for Gamma-scenarios the problem is again strongly NP-hard,
but can be approximated.
In multicriteria optimization, a compromise solution is a feasible solution whose cost vector minimizes the distance to the ideal point w.r.t. a given norm. The coordinates of the ideal point are given by the optimal values of the single-criterion optimization problems, one for each criterion.
We show that the concept of compromise solutions fits nicely into the existing notion of Pareto optimality: For a huge class of norms, every compromise solution is Pareto optimal, and under certain conditions on the norm all Pareto optimal solutions are also compromise solutions, for an appropriate weighting of the criteria. Furthermore, under similar conditions on the norm, the existence of an FPTAS for compromise solutions guarantees the approximability of the Pareto set.
These general results are completed by applications to classical combinatorial optimization problems. In particular, we study approximation algorithms for the multicriteria shortest path problem and the multicriteria minimum spanning tree problem. On the one hand, we derive approximation schemes for both problems; on the other hand, we show that for the latter problem simple approaches like local search and greedy techniques do not guarantee good approximation factors.
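In symbols, with criteria $f = (f_1,\dots,f_k)$ over the feasible set $X$ and a given norm $\lVert\cdot\rVert$, a compromise solution solves
\[
\min_{x \in X}\; \bigl\lVert f(x) - y^{I} \bigr\rVert, \qquad \text{where } y^{I}_i = \min_{x \in X} f_i(x), \quad i = 1,\dots,k.
\]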
In this paper, we investigate the recoverable robust knapsack problem,
where the uncertainty of the item weights follows the approach of Bertsimas and
Sim. In contrast to the robust approach, a limited recovery action is allowed,
i.e., up to k items may be removed when the actual weights are known. This problem
is motivated by the assignment of traffic nodes to antennas in wireless network
planning. Starting from an exponential min-max optimization model, we derive an
integer linear programming formulation of quadratic size. In a preliminary computational
study, we evaluate the gain of recovery using realistic planning data.
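A minimal Python sketch of the recovery action for a fixed first-stage solution, evaluated once the actual weights are revealed. The greedy removal rule is only illustrative (an exact recovery would pick the removed subset optimally), and all names are ours.

```python
def recover(items, realized_weight, profit, capacity, k):
    """Remove up to k items from a first-stage set to regain feasibility
    after the item weights are revealed (positive weights assumed).

    Greedy rule: drop items with the smallest profit per unit of realized
    weight first.  Returns (kept_items, feasible_flag).
    """
    kept = list(items)
    load = sum(realized_weight[i] for i in kept)
    removed = 0
    for i in sorted(kept, key=lambda i: profit[i] / realized_weight[i]):
        if load <= capacity or removed == k:
            break
        kept.remove(i)
        load -= realized_weight[i]
        removed += 1
    return kept, load <= capacity
```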
The knapsack problem is one of the basic problems in combinatorial optimization. In real-world applications it is often part of a more complex problem. Examples are machine capacities in production planning or bandwidth restrictions in telecommunication network design. Due to unpredictable future settings or erroneous data, parameters of such a subproblem are subject to uncertainties.
In high risk situations a robust approach should be chosen to deal with these uncertainties.
Unfortunately, classical robust optimization outputs solutions with little profit by prohibiting any adaption of the solution when the actual realization of the uncertain parameters is known.
This ignores the fact that in most settings minor changes to a previously determined solution are possible. To overcome these drawbacks we allow a limited recovery of a previously fixed item set as soon as the data are known by deleting at most k items and adding up to l new items.
We consider the complexity status of this recoverable robust knapsack problem and extend the classical concept of cover inequalities to obtain stronger polyhedral descriptions. Finally, we present two extensive computational studies to investigate the influence of the parameters k and l on the objective and to evaluate the effectiveness of our new class of valid inequalities.
We consider a sorting problem from railway optimization called train classification: incoming trains are split up into their single cars and reassembled to form new outgoing trains. Trains are subject to delay, which may render a prepared sorting schedule infeasible for the disturbed situation. The classification methods applied today deal with this issue by completely disregarding the input order of cars, which provides robustness against any amount of disturbance but also wastes the potential contained in the a priori knowledge about the input.
We introduce a new method that provides a feasible sorting schedule for the expected input and allows us to flexibly insert additional sorting steps if the schedule has become infeasible after the disturbed input is revealed. By excluding disruptions that almost never occur from our consideration, we obtain a classification process that is quicker than current railway practice but still provides robustness against realistic delays. In fact, our algorithm allows flexibly trading off fast classification against high degrees of robustness, depending on the respective need. We further explore this flexibility in experiments on real-world traffic data, underlining that our algorithm improves on the methods currently applied in practice.
We consider a basic subproblem which arises in line planning,
and is of particular importance in the context of a high system
load or robustness: How much can be routed maximally along all possible
lines? The essence of this problem is the Path Constrained Network
Flow (PCN) problem. We explore the complexity of this problem and
its dual. In particular we show for the primal that it is as hard to
approximate as MAX CLIQUE and for the dual that it is as hard to
approximate as SET COVER. We also prove that the PCN problem is
hard for special graph classes, interesting both from a complexity and
from a practical perspective. Finally, we present a special graph class
for which there is a polynomial-time algorithm.
This paper discusses adaptive finite element methods (AFEMs) for the solution of elliptic eigenvalue problems associated with partial differential operators. An adaptive method based on nodal-patch refinement leads to an asymptotic error reduction property for the computed sequence of simple eigenvalues and eigenfunctions. This justifies the use of the proven saturation property for a class of reliable and efficient hierarchical a posteriori error estimators. Numerical experiments confirm that the saturation property is present even for very coarse meshes for many examples; in other cases the smallness assumption on the initial mesh may be severe.
A theorem on error estimates for smooth nonlinear programming
problems in Banach spaces is proved that can be used to derive
optimal error estimates for optimal control problems. This theorem is applied
to a class of optimal control problems for quasilinear elliptic equations.
The state equation is approximated by a finite element scheme, while different
discretization methods are used for the control functions. The distance of
locally optimal controls to their discrete approximations is estimated.
We provide results on the existence and uniqueness of equilibrium in dynamically incomplete financial markets in discrete time. Our framework allows for heterogeneous agents, unspanned random endowments and convex trading constraints. In the special case where all agents have preferences of the same type and all random endowments are replicable by trading in the financial market we show that a one-fund theorem holds and give an explicit expression for the equilibrium pricing kernel. If the underlying noise is generated by finitely many Bernoulli random walks, the equilibrium dynamics can be described by a system of coupled backward stochastic difference equations, which in the continuous-time limit becomes a multi-dimensional backward stochastic differential equation. If the market is complete in equilibrium, the system of equations decouples, but if not, one needs to keep track of the prices and continuation values of all agents to solve it. As an example we simulate option prices in the presence of stochastic volatility, demand pressure and short-selling constraints.
Single-molecule force spectroscopy has proven to be a powerful tool for studying the kinetic behavior of biomolecules. Through application of an external force, conformational states with small or transient populations can be stabilized, allowing them to be characterized and the statistics of individual trajectories studied to provide insight into biomolecular folding and function. Because the observed quantity (force or extension) is not necessarily an ideal reaction coordinate, individual observations cannot be uniquely associated with kinetically distinct conformations. While maximum-likelihood schemes such as hidden Markov models have solved this problem for other classes of single-molecule experiments by using temporal information to aid in the inference of a sequence of distinct conformational states, these methods do not give a clear picture of how precisely the model parameters are determined by the data due to instrument noise and finite-sample statistics, both significant problems in force spectroscopy. We solve this problem through a Bayesian extension that allows the experimental uncertainties to be directly quantified, and build in detailed balance to further reduce uncertainty through physical constraints. We illustrate the utility of this approach in characterizing the three-state kinetic behavior of an RNA hairpin in a stationary optical trap.
While seemingly straightforward in principle, the reliable estimation of rate constants is seldom easy in practice. Numerous issues, such as the complication of poor reaction coordinates, cause obvious approaches to yield unreliable estimates. When a reliable order parameter is available, the reactive flux theory of Chandler allows the rate constant to be extracted from the plateau region of an appropriate reactive flux correlation function. However, when applied to real data from single-molecule experiments or molecular dynamics simulations, the reactive flux correlation function requires the numerical differentiation of a noisy empirical correlation function, which can result in an unacceptably poor estimate of the rate and pathological dependence on the sampling interval. We present a modified version of this theory which does not require numerical derivatives, allowing rate constants to be robustly estimated from the time-correlation function directly. We illustrate the approach using single-molecule passive force spectroscopy measurements of an RNA hairpin.
Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise due to the finite quantity of data used to construct the model), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
Dynamical averages based on functionals of dynamical trajectories, such as time-correlation functions, play an important role in determining kinetic or transport properties of matter. At temperatures of interest, the expectations of these quantities are often dominated by contributions from rare events, making the precise calculation of these quantities by molecular dynamics simulation difficult. Here, we present a reweighting method for combining simulations from multiple temperatures (or from simulated or parallel tempering simulations) to compute an optimal estimate of the dynamical properties at the temperature of interest without the need to invoke an approximate kinetic model (such as the Arrhenius law). Continuous and differentiable estimates of these expectations at any temperature in the sampled range can also be computed, along with an assessment of the associated statistical uncertainty. For rare events, aggregating data from multiple temperatures can produce an estimate of the desired precision at greatly reduced computational cost compared with simulations conducted at a single temperature. Here, we describe use of the method for the canonical (NVT) ensemble using four common models of dynamics (canonical distribution of Hamiltonian trajectories, Andersen thermostatting, Langevin, and overdamped Langevin or Brownian dynamics), but it can be applied to any thermodynamic ensemble provided the ratio of path probabilities at different temperatures can be computed. To illustrate the method, we compute a time-correlation function for solvated terminally-blocked alanine peptide across a range of temperatures using trajectories harvested using a modified parallel tempering protocol.
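One standard way to realize such an estimator (our notation, not necessarily the paper's exact construction) is a multi-state reweighting of all harvested trajectories: with $x_{kn}$ the $n$-th trajectory harvested at inverse temperature $\beta_k$ and $P_\beta[\cdot]$ the path probability of the chosen dynamics,
\[
\hat{A}(\beta) \;=\; \sum_{k=1}^{K} \sum_{n=1}^{N_k} w_{kn}(\beta)\, A[x_{kn}],
\qquad
w_{kn}(\beta) \;\propto\; \frac{P_{\beta}[x_{kn}]}{\sum_{k'=1}^{K} N_{k'}\, \hat c_{k'}^{-1}\, P_{\beta_{k'}}[x_{kn}]},
\]
where the weights are normalized to sum to one and the constants $\hat c_{k'}$ are determined self-consistently. Only ratios of path probabilities enter, as required above.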
This paper is concerned with a diffusion model of phase-field type, consisting of a parabolic system of two partial differential equations, interpreted as balances of microforces and microenergy, for two unknowns: the problem's order parameter $\rho$ and the chemical potential $\mu$. Each equation includes a viscosity term, $\varepsilon\,\partial_t\mu$ and $\delta\,\partial_t\rho$ respectively, with $\varepsilon$ and $\delta$ two positive parameters; the field equations are complemented by homogeneous Neumann boundary conditions and suitable initial conditions. In a recent paper [CGPS3], we proved that this problem is well posed and investigated the long-time behavior of its $(\varepsilon,\delta)$-solutions. Here we discuss the asymptotic limit of the system as $\varepsilon$ tends to $0$. We prove convergence of $(\varepsilon,\delta)$-solutions to the corresponding solutions for the case $\varepsilon = 0$, whose long-time behavior we characterize; in the proofs, we employ compactness and monotonicity arguments.
We investigate a distributed optimal control problem for a phase field
model of Cahn-Hilliard type. The model describes two-species phase segregation
on an atomic lattice under the presence of diffusion; it has been introduced recently in
[4], on the basis of the theory developed in [15], and consists of a system of two
highly nonlinearly coupled PDEs. For this reason, standard arguments of optimal control theory do not apply
directly, although the control constraints and the cost functional are of standard type.
We show that the problem admits a solution, and we derive the first-order
necessary conditions of optimality.
We investigate a nonstandard phase field
model of Cahn-Hilliard type. The model, which was introduced in
[16], describes two-species phase segregation and consists of a
system of two highly nonlinearly coupled PDEs. It has been studied
recently in
[5], [6] for the case of homogeneous Neumann
boundary conditions. In this paper, we investigate the case that the
boundary condition for one of the unknowns of the system is of third
kind and nonhomogeneous. For the resulting system, we show
well-posedness, and we study optimal boundary control
problems. Existence of optimal controls is shown, and the first-order
necessary optimality conditions are derived. Owing to the strong
nonlinear couplings in the PDE system, standard arguments of optimal
control theory do not apply directly, although the control constraints
and the cost functional will be of standard type.
A nonlocal quasilinear multi-phase system with nonconstant specific heat and heat conductivity
(2012)
In this paper, we prove the existence
and global boundedness from above for a solution to an
integrodifferential model for nonisothermal multi-phase
transitions under nonhomogeneous third type boundary conditions.
The system couples a quasilinear internal energy balance
ruling the evolution of the absolute temperature with a vectorial
integro-differential inclusion governing the vectorial
phase-parameter dynamics. The specific heat and the heat conductivity $k$ are allowed to depend both on the order parameter $\chi$ and on the absolute temperature $\theta$ of the system, and
the convex component of the free energy may or may not be
singular. Uniqueness and continuous data dependence are
also proved under additional assumptions.
Recent research has shown that
in some practically relevant situations like multi-physics flows [11]
divergence-free mixed finite elements may have a significantly
smaller discretization error than standard non-divergence-free
mixed finite elements. In order to judge the overall performance of
divergence-free mixed finite elements, we
investigate linear solvers for the saddle point linear systems arising in $((P_k)^d,P_{k-1}^{disc})$ Scott-Vogelius finite element implementations of the incompressible Navier-Stokes equations. We investigate both direct and iterative solver methods.
Due to discontinuous pressure elements in the case of Scott-Vogelius elements, considerably more solver strategies seem to deliver promising results than in the case of standard mixed finite elements like
Taylor-Hood elements. For direct methods, we extend recent preliminary work using sparse banded solvers on the penalty method formulation to finer meshes, and discuss extensions. For iterative methods, we test augmented Lagrangian and H-LU preconditioners with GMRES, on both full and statically condensed systems.
Several numerical experiments are provided that show these classes of solvers are well suited for use with Scott-Vogelius elements, and could deliver an interesting overall performance in several applications.
A robust implementation of a Dupire type local volatility model is an important issue for every option trading floor. In the present note we provide new analytic insights into the asymptotic behavior of local volatility in the wings. We present a general approximation formula and specialize it to the Heston model, showing that local variance is linear in the wings. This further justifies the choice of certain local volatility parametrizations.
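For reference, the Dupire local variance extracted from a call price surface $C(K,T)$ reads (here assuming zero rates and dividends)
\[
\sigma_{\mathrm{loc}}^2(K,T) \;=\; \frac{\partial_T C(K,T)}{\tfrac{1}{2}\,K^2\, \partial_{KK} C(K,T)},
\]
and the result quoted above says that, in the Heston model, this local variance is asymptotically linear (in log-strike) in the wings.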
In the paradigm of von Neumann and Morgenstern, a representation of affine preferences in terms of an expected utility can be obtained under the assumption of weak continuity. Since the weak topology is coarse, this requirement is a priori far from being negligible. In this work, we replace the assumption of weak continuity by monotonicity. More precisely, on the space of lotteries on an interval of the real line, it is shown that any affine preference order which is monotone with respect to the first stochastic order admits a representation in terms of an expected utility for some nondecreasing utility function. As a consequence, any affine preference order on the subset of lotteries with compact support, which is monotone with respect to the second stochastic order, can be represented in terms of an expected utility for some nondecreasing concave utility function. We also provide such representations for affine preference orders on the subset of those lotteries which fulfill some integrability conditions. The subtleties of the weak topology are illustrated by some examples.
We consider convex optimization problems with $k$th order stochastic dominance constraints for $k\ge 2$. We discuss distances of random variables that are relevant for the dominance relation and establish quantitative stability results for optimal values and solution sets in terms of a suitably selected probability metric. Moreover, we provide conditions ensuring that the optimal value function is Hadamard directionally differentiable. Finally, we discuss some implications of the results for empirical (Monte Carlo, sample average) approximations of dominance constrained optimization models.
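In the usual notation, the $k$th order dominance constraints referred to above compare iterated distribution functions: writing $F^{(1)}_X = F_X$ and $F^{(j+1)}_X(t) = \int_{-\infty}^{t} F^{(j)}_X(s)\,ds$, a random variable $X$ dominates $Y$ in $k$th order if
\[
F^{(k)}_X(t) \;\le\; F^{(k)}_Y(t) \qquad \text{for all } t \in \mathbb{R}.
\]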
Density expansions for hypoelliptic diffusions $(X^1,\dots,X^d)$ are revisited. In particular, we are interested in density expansions of the projection $(X^1_T,\dots,X^l_T)$ at time $T>0$, with $l \le d$. Global conditions are found which replace the well-known "not-in-cutlocus" condition known from heat-kernel asymptotics; cf. G. Ben Arous (1988). Our small noise expansion allows for a "second order" exponential factor. Applications include tail and implied volatility asymptotics in some correlated stochastic volatility models; in particular, we solve a problem left open by A. Gulisashvili and E.M. Stein (2009).
Modelling, parameter identification, and simulation play an important rôle in Systems Biology. In recent years, various software packages have been established for scientific use under both licensing types, open source as well as commercial. Many of these codes are based on inefficient and mathematically outdated algorithms. By introducing the package BioPARKIN, recently developed at ZIB, we want to improve this situation significantly. The development of the software BioPARKIN involves long-standing mathematical ideas that, however, have not yet entered the field of Systems Biology, as well as new ideas and tools that are particularly important for the analysis of the dynamics of biological networks. BioPARKIN originates from the package PARKIN, written by P. Deuflhard and U. Nowak, that has been applied successfully for parameter identification in physical chemistry for many years.
We study a nonlinear operator defined via minimal supersolutions of backward stochastic differential equations with generators that are monotone in y, convex in z, jointly lower semicontinuous, and bounded below by an affine function of the control variable. We show existence, uniqueness, monotone convergence, Fatou’s Lemma and lower semicontinuity of this functional. We provide a comparison principle for the underlying minimal supersolutions of BSDEs, which we illustrate by maximizing expected exponential utility.
Mathematical modeling of Czochralski type growth processes for semiconductor bulk single crystals
(2012)
This paper deals with the mathematical modeling and simulation of
crystal growth processes by the so-called Czochralski method and related methods,
which are important industrial processes
to grow large
bulk single crystals of semiconductor materials such as, e.g., gallium arsenide
(GaAs) or silicon (Si) from the melt.
In particular, we investigate a recently developed
technology in which traveling magnetic fields are applied in order to
control
the behavior of the turbulent melt flow. Since numerous different physical effects
like electromagnetic fields, turbulent melt flows, high temperatures, heat transfer via
radiation, etc., play an important role in the process, the corresponding mathematical
model leads to an extremely difficult system of initial-boundary value problems for
nonlinearly coupled partial differential equations. In this paper, we describe a mathematical model that is in use for the simulation of real-life growth scenarios, and we give an overview of mathematical results and numerical simulations that have been obtained for it in recent years.
Some mathematical problems related to the 2nd order optimal shape of a crystallization interface
(2012)
We consider the problem of optimizing the stationary temperature distribution and the equilibrium shape of the solid-liquid interface in a two-phase system subject to a temperature gradient. The interface satisfies the minimization principle of the free energy, while the temperature solves the heat equation with radiation boundary conditions at the outer wall. Under the condition that the temperature gradient is uniformly negative in the direction of crystallization, the interface is expected to have a global graph representation. We reformulate this condition as a pointwise constraint on the gradient of the state, and we derive the first order optimality system for a class of objective functionals that account for the second surface derivatives and for the surface temperature gradient.
Analysis of the second phase of the GMRES convergence for a convection-diffusion model problem
(2012)
It is well known that GMRES applied to linear algebraic systems arising from a convection-diffusion model problem that has been discretized by the streamline upwind Petrov-Galerkin (SUPG) method typically displays two distinct phases of convergence: a slow initial phase followed by a convergence acceleration in the second phase. This paper complements the known results on the length of the initial phase by analyzing how the acceleration in the second phase of convergence is related to the mesh Peclet number and the choice of the stabilization parameter in the SUPG discretization. The analysis is based on some new expressions and bounds for the GMRES residuals, which can be of general interest.
The article investigates the relation between global solutions of hyperbolic balance laws and viscous balance laws on the circle. It is thematically located at the crossroads of hyperbolic and parabolic partial differential equations with one-dimensional space variable and periodic boundary conditions. The two equations are given by:
\[ u_t + f(u)_x = g(u) \]
and
\[ u_t + f(u)_x = \varepsilon\, u_{xx} + g(u), \]
with viscosity parameter $\varepsilon > 0$.
The main result of the paper corrects a result on the persistence of heteroclinic connections by Fan and Hale from 1995 when viscosity vanishes: The "Connection Lemma" states that a connection can only persist if the zero number of the source state is a multiple of the zero number of the target state. The "Cascading Theorem" then yields convergence of heteroclinic connections to a sequence of heteroclinic connections and stationary solutions in case of non-persistence.
In addition, a full description of the connection problem of rotating waves on the parabolic attractor is given.
A novel Finite Element Method (FEM) for the computational simulation of particle reinforced composite materials with many inclusions is presented. It is based on an adapted mesh which consists of triangles and parametric quadrilaterals in 2D. The number of elements and, hence, the number of degrees of freedom are proportional to the number of inclusions. The error of the method is independent of the distance between neighboring inclusions. While being related to network methods, the approach can tackle more general settings. We present an efficient residual a posteriori error estimator which enables the computation of reliable upper and lower error bounds. Several numerical examples illustrate the performance of the method and the error estimator. Moreover, it is demonstrated that the assumption of a lattice structure of inclusions can easily lead to incorrect predictions about material properties.
Particle methods have become indispensable in conformation dynamics to compute transition rates in protein folding, binding processes and molecular design, to mention a few. Conformation dynamics requires a decomposition of a molecule's position space into metastable conformations. In this paper, we show how this decomposition can be obtained via the design of either "soft" or "hard" molecular conformations. We show that the soft approach results in a larger metastability of the decomposition and is thus more advantageous. This is illustrated by a simulation of alanine dipeptide.
Global higher integrability of minimizers of variational problems with mixed boundary conditions
(2012)
We consider integral functionals with densities of p-growth, with respect to gradients, on a Lipschitz domain with mixed boundary conditions. The aim of this paper is to prove that, under uniform estimates within certain classes of p-growth and coercivity assumptions on the density, the minimizers are of higher integrability order, meaning that they belong to the space of first order Sobolev functions with an integrability of order $p+\epsilon$ for a uniform $\epsilon >0$. The results are applied to a model describing damage evolution in a nonlinear elastic body and to a model for shape memory alloys.
We consider discretizations for reaction-diffusion systems with nonlinear
diffusion in two space dimensions. The applied model can handle heterogeneous
materials and uses the chemical potentials of the involved species as primary variables.
We propose an implicit Voronoi finite volume discretization on regular Delaunay
meshes that allows us to prove uniform, mesh-independent global upper and lower L1
bounds for the chemical potentials. These bounds provide the main step for a convergence
analysis for the full discretized nonlinear evolution problem. The fundamental
ideas are energy estimates, a discrete Moser iteration and the use of discrete
Gagliardo-Nirenberg inequalities. For the proof of the Gagliardo-Nirenberg inequalities
we exploit that the discrete Voronoi finite volume gradient norm in 2d coincides
with the gradient norm of continuous piecewise linear finite elements.
We analyze a remarkable class of centrally symmetric polytopes, the Hansen
polytopes of split graphs. We confirm Kalai's 3^d-conjecture for such polytopes
(they all have at least 3^d nonempty faces) and show that the Hanner polytopes
among them (which have exactly 3^d nonempty faces) correspond to threshold
graphs. Our study produces a new family of Hansen polytopes that have only
3^d+16 nonempty faces.
With an emphasis on generators with quadratic growth in the control variable we consider
measure solutions of BSDE, a solution concept corresponding to the notion of risk neutral
measure in mathematical finance. In terms of measure solutions, solving a BSDE reduces
to martingale representation with respect to an underlying filtration. Measure solutions
related to measures equivalent to the historical one provide classical solutions. We derive
the existence of measure solutions in scenarios in which the generating functions are just
continuous, of at most linear growth in the control variable (corresponding to generators of
at most quadratic growth in the usual sense), and with a random bound in the time parameter
whose stochastic integral is a BMO martingale. Our main tools include a stability property
of sequences of measure solutions, for which a limiting solution is obtained by means of the
weak convergence of measures.
We study the expansion in Hermite functions of eigenfunctions of Schrödinger operators with smooth confinement potentials, i.e., potentials that become unbounded at infinity. The key result is that, under mild growth conditions on the potential and its derivatives, such eigenfunctions and all their derivatives decay more rapidly than any exponential function. Their expansion in Hermite functions therefore converges very fast, namely super-algebraically.
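For reference (standard definitions, stated here for completeness): the Hermite functions and the expansion in question are
\[
h_n(x) = \big(2^n n!\,\sqrt{\pi}\big)^{-1/2} H_n(x)\, e^{-x^2/2}, \qquad
u = \sum_{n\ge 0} c_n h_n, \quad c_n = \int_{\mathbb{R}} u(x)\, h_n(x)\, dx,
\]
where $H_n$ denotes the $n$-th Hermite polynomial; super-algebraic convergence means $|c_n| \le C_k (1+n)^{-k}$ for every $k$.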
The authors propose a recycling MINRES scheme for the solution of sequences of self-adjoint linear systems as they appear, for example, in a Newton process for solving nonlinear equations. Ritz vectors are automatically extracted from one MINRES run and then used for self-adjoint deflation in the next. The method is designed to work with a preconditioner and with arbitrary inner products. Numerical experiments with nonlinear Schrödinger equations indicate a substantial decrease in computation time when recycling is used.
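The following sketch is not the authors' algorithm, only a minimal illustration of the recycling idea under simplifying assumptions: approximate eigenvectors from a previous solve are reused as a Galerkin initial guess for the next right-hand side (the matrix, subspace size and right-hand side below are made up).

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh, minres

# hypothetical ill-conditioned symmetric positive definite test matrix
n = 500
A = diags(np.linspace(1e-3, 1.0, n)).tocsc()
b = np.random.default_rng(1).standard_normal(n)

def iterations(x0=None):
    count = [0]
    minres(A, b, x0=x0, callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

# "recycled" subspace: eigenvectors belonging to the smallest eigenvalues
_, V = eigsh(A, k=20, sigma=0.0)               # shift-invert finds them
# Galerkin projection of the new system onto the recycled subspace
x0 = V @ np.linalg.solve(V.T @ (A @ V), V.T @ b)

print("MINRES iterations, cold start:", iterations())
print("MINRES iterations, recycled  :", iterations(x0))
```

Projecting out the troublesome small-eigenvalue components before the Krylov iteration starts is the simplest form of the deflation effect that the recycled Ritz vectors provide in the paper's setting.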
We develop a model for the dynamic evolution of default-free and defaultable interest rates in a LIBOR framework. Built on the class of affine processes, the model produces positive LIBOR rates and spreads, while the dynamics remain analytically tractable under defaultable forward measures. This leads to explicit formulas for CDS spreads, while semi-analytical formulas are derived for other credit derivatives. Finally, we give an application to counterparty risk.
Flows over time generalize classical ``static'' network flows by introducing a temporal dimension. They can thus be used to model non-instantaneous travel times for flow and variation of flow values over time, both of which are crucial characteristics in many real-world routing problems. There exist two different models of flows over time with respect to flow conservation: one where flow might be stored temporarily at intermediate nodes and a stricter model where flow entering an intermediate node must instantaneously progress to the next arc. While the first model is in general easier to handle, the second model is often more realistic since in applications like, e.g., road traffic, storage of flow at intermediate nodes is undesired or even prohibited. The main contribution of this paper is a fully polynomial time approximation scheme (FPTAS) for (min-cost) multi-commodity flows over time without intermediate storage. This improves upon the best previously known $(2+\varepsilon)$-approximation algorithm presented 10 years ago by Fleischer and Skutella (IPCO~2002).
Flows over time and generalized flows are two advanced network flow models of utmost importance, as they incorporate two crucial features occurring in numerous real-life networks. Flows over time feature time as a problem dimension and realistically model the fact that commodities (goods, information, etc.) are routed through a network over time. Generalized flows allow for gain/loss factors on the arcs that model physical transformations of a commodity due to leakage, evaporation, breeding, theft, or interest rates. Although the latter effects are usually time-bound, generalized flow models featuring a temporal dimension have never been studied in the literature.
In this paper we introduce the problem of computing a generalized maximum flow over time in networks with both gain factors and transit times on the arcs. While generalized maximum flows and maximum flows over time can be computed efficiently, our combined problem turns out to be NP-hard and even completely non-approximable. A natural special case is given by lossy networks where the loss rate per time unit is identical on all arcs. For this case we present a (practically efficient) FPTAS that also reveals a surprising connection to so-called earliest arrival flows.
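Our reading of the lossy special case, hedged as an illustration: a common loss rate $\lambda \in (0,1]$ per time unit ties the gain factor of every arc $e$ to its transit time $\tau_e$ via
\[
\gamma_e = \lambda^{\tau_e},
\]
so flow of value $x$ entering arc $e$ at time $\theta$ reaches the head of $e$ at time $\theta + \tau_e$ with value $\lambda^{\tau_e} x$.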
We introduce nonsmooth Schur-Newton methods for the solution of the nonlinear discrete saddle-point problems arising from discretized vector-valued Cahn-Hilliard equations with logarithmic and obstacle potentials. The discrete problems are obtained by semi-implicit discretization in time and a first-order finite element discretization in space. We incorporate the linear constraints that enforce solutions to stay on the Gibbs simplex using Lagrange multipliers and prove existence of these multipliers under the assumption of a non-trivial initial condition for the order parameters.
We consider risk-averse formulations of multistage stochastic linear programs. For these formulations, based on convex combinations of spectral risk measures, risk-averse dynamic programming equations can be written. As a result, the Stochastic Dual Dynamic Programming (SDDP) algorithm can be used to obtain approximations of the corresponding risk-averse recourse functions. This allows us to define a risk-averse nonanticipative feasible policy for the stochastic linear program. Formulas for the cuts that approximate the recourse functions are given.
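As a hedged illustration (a common instance of such risk-averse recursions, not necessarily the paper's exact formulation), take the convex combination $\rho = (1-\lambda)\,\mathbb{E} + \lambda\,\mathrm{AV@R}_\alpha$ of expectation and Average Value-at-Risk; the dynamic programming equations then read
\[
Q_t(x_{t-1},\xi_t) \;=\; \min_{x_t \ge 0}\;\Big\{\, c_t^\top x_t + \rho\big[Q_{t+1}(x_t,\xi_{t+1})\big] \;:\; A_t x_t = b_t - B_t x_{t-1} \Big\},
\]
and SDDP approximates each recourse function $\rho[Q_{t+1}(\cdot,\xi_{t+1})]$ from below by a polyhedral model assembled from subgradient cuts.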
We present a unified computational framework for matching 3D geometric objects (points, lines, surfaces, volumes) of highly varying shape. Our approach is based on the Large Deformation Diffeomorphic Metric Mapping (LDDMM) method acting on $m$-currents. After stating an optimization algorithm in the function space of admissible morph-generating velocity fields, we present two innovative aspects of this framework: first, we spatially discretize the velocity field with conforming adaptive finite elements and discuss the advantages of this new approach; second, we directly compute the temporal evolution of discrete $m$-current attributes. Several numerical experiments demonstrate the effectiveness of this approach.
We propose a new approach to competitive analysis by introducing the novel concept of online approximation schemes. Such a scheme algorithmically constructs an online algorithm with a competitive ratio arbitrarily close to the best possible competitive ratio of any online algorithm. We study the problem of scheduling jobs online to minimize the weighted sum of completion times on parallel, related, and unrelated machines, and we derive both deterministic and randomized algorithms which are almost best possible among all online algorithms for the respective settings. Our method relies on an abstract characterization of online algorithms combined with various simplifications and transformations. We also contribute algorithmic means to compute the actual value of the best possible competitive ratio up to arbitrary accuracy. This strongly contrasts with all previous manually obtained competitiveness results and, most importantly, it reduces the search for the optimal competitive ratio to a question that a computer can answer. We believe that our method can be applied to many other problems as well and yields a completely new and interesting view on online algorithms.
A capillary surface in a negative gravitational field describes the shape of the surface of a hanging drop in a capillary tube with wetting material at the bottom. Mathematical modeling leads to the volume- and obstacle-constrained minimization of a nonconvex nonlinear energy functional of mean curvature type which is unbounded from below. In 1984, Huisken proved the existence and regularity of local minimizers of this energy under the condition that gravitation is sufficiently weak. We prove convergence of a first-order finite element approximation of these minimizers. Numerical results demonstrating the theoretical convergence order are given.
Quasi-Monte Carlo algorithms are studied for designing discrete approximations of two-stage linear stochastic programs. Their integrands are piecewise linear, but neither smooth nor of bounded variation in the sense of Hardy and Krause. We show that, under a weak geometric condition on the two-stage model, all terms of their ANOVA decomposition, except the one of highest order, are smooth, and hence certain Quasi-Monte Carlo algorithms may achieve the optimal rate of convergence $O(n^{-1+\delta})$ with $\delta\in(0,\frac{1}{2})$ and a constant not depending on the dimension, provided the integrands belong to weighted tensor product Sobolev spaces with properly selected weights. The geometric condition is generically (i.e., almost everywhere) satisfied if the underlying distribution is normal. We also discuss sensitivity indices and efficient dimensions of two-stage integrands, and suggest a dimension reduction heuristic for such integrands.
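A minimal numerical sketch of the setting, under assumptions of our own (the integrand, dimension and sample size below are made up; scipy's scrambled Sobol' sequence stands in for the Quasi-Monte Carlo algorithms of the paper):

```python
import numpy as np
from scipy.stats import norm, qmc

# piecewise linear, kinked integrand of two-stage type: max(0, c^T xi - b)
c, b = np.array([1.0, 0.5, -0.25, 0.75]), 0.2
integrand = lambda xi: np.maximum(0.0, xi @ c - b)

d, n = 4, 2**12
mc_pts = np.random.default_rng(0).standard_normal((n, d))            # plain MC
qmc_pts = norm.ppf(qmc.Sobol(d=d, scramble=True, seed=0).random(n))  # Sobol' -> N(0, I)

print(f"MC  estimate: {integrand(mc_pts).mean():.5f}")
print(f"QMC estimate: {integrand(qmc_pts).mean():.5f}")
```

Repeating both estimators over independent replications and comparing their variances as $n$ grows is the standard way to observe the faster QMC convergence rate empirically.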
Protein-ligand interactions are essential for nearly all biological processes, and yet the biophysical mechanism that enables potential binding partners to associate before specific binding occurs remains poorly understood. Fundamental questions include which factors influence the formation of protein-ligand encounter complexes, and whether designated association pathways exist. In this article we introduce a computational approach to systematically analyze the complete ensemble of association pathways and thus to investigate these questions. This approach is employed here to study the binding of a phosphate ion to the Escherichia coli Phosphate Binding Protein. Various mutants of the protein are considered, and their effects on binding free energy profiles, association rates and association pathway distributions are quantified. The results reveal the existence of two anion attractors, i.e., regions that initially attract negatively charged particles and allow them to be efficiently screened for phosphate, which is then specifically bound. Point mutations that affect the charge on these attractors modulate their attraction strength, speed up association to within a factor of 10 of the diffusion limit, and thus change the association pathways of the phosphate ligand. It is demonstrated that a phosphate ion that pre-binds to such an attractor neutralizes its attractive effect on the environment, making the simultaneous association of a second phosphate ion unlikely. Our study suggests ways in which structural properties can be used to tune molecular association kinetics so as to optimize the efficiency of binding, and highlights the importance of kinetic properties.
While studies of protein-ligand association have mostly focused on the native complex and its stability (binding affinity), relatively little attention has been paid to the association process that precedes the formation of the complex. Here we review approaches to study the kinetics of association and association mechanisms, i.e., the probability distribution of association pathways. Selected methods are described that allow these properties to be calculated quantitatively from simulation models. We summarize some applications of these methods and finally propose a model mechanism by which proteins may efficiently screen potential ligands for those that can be natively bound.
In this paper we propose and analyze a new multiscale method for solving semilinear elliptic problems with heterogeneous and highly variable coefficient functions. For this purpose we construct a generalized finite element basis that spans a low-dimensional multiscale space. The basis is assembled by performing localized linear fine-scale computations in small patches that have a diameter of order $H|\log(H)|$, where $H$ is the coarse mesh size. Without any assumptions on the type of the oscillations in the coefficients, we give a rigorous proof of linear convergence of the $H^1$-error with respect to the coarse mesh size. To solve the arising equations, we propose an algorithm based on a damped Newton scheme in the multiscale space.
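The outer iteration can be pictured as follows; this is a generic damped Newton sketch on a toy semilinear system (the matrix, nonlinearity and damping rule are illustrative assumptions, not the paper's multiscale solver):

```python
import numpy as np

def damped_newton(F, J, x, tol=1e-10, max_iter=50):
    """Solve F(x) = 0; damp the Newton step until the residual decreases."""
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(J(x), -r)
        s = 1.0
        while np.linalg.norm(F(x + s * dx)) >= np.linalg.norm(r) and s > 1e-8:
            s *= 0.5                      # halve the step until it helps
        x = x + s * dx
    return x

# toy semilinear system: A x + x^3 = b
A = np.diag([2.0, 3.0, 4.0])
b = np.ones(3)
F = lambda x: A @ x + x**3 - b
J = lambda x: A + np.diag(3 * x**2)
print(damped_newton(F, J, np.zeros(3)))
```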
We provide lower estimates for the norm of gradients of Gaussian
distribution functions and apply the results obtained to a special class of
probabilistically constrained optimization problems. In particular, it is shown
how the precision of computing gradients in such problems can be controlled
by the precision of function values for Gaussian distribution functions. Moreover,
a sensitivity result for optimal values with respect to perturbations of the
underlying random vector is derived. It is shown that the so-called maximal
increasing slope of the optimal value with respect to the Kolmogorov distance
between original and perturbed distribution can be estimated explicitly from
the input data of the problem.
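For background, the following standard identity for a nondegenerate Gaussian random vector $X\sim\mathcal{N}(\mu,\Sigma)$ with distribution function $F$ underlies such gradient computations (a well-known fact, not a formula quoted from the paper):
\[
\frac{\partial F}{\partial x_i}(x) \;=\; f_{X_i}(x_i)\, F_{X_{-i}\mid X_i=x_i}(x_{-i}),
\]
where $f_{X_i}$ is the one-dimensional marginal density of $X_i$ and the second factor is the (again Gaussian) distribution function of the remaining components conditioned on $X_i=x_i$. Evaluating a gradient thus reduces to evaluating lower-dimensional Gaussian distribution functions, which is what ties the precision of gradients to the precision of function values.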
This paper deals with the computation of regular coderivatives of
solution maps associated with a frequently arising class of generalized equations.
The constraint sets are given by (not necessarily convex) inequalities, and we do not assume linear independence of the gradients of the active constraints.
The achieved results enable us to state several versions of sharp necessary optimality
conditions in optimization problems with equilibria governed by such
generalized equations. The advantages are illustrated by means of examples.
We study minimal supersolutions of backward stochastic differential equations. We show the existence and uniqueness of the minimal supersolution, if the generator is jointly lower semicontinuous, bounded from below by an affine function of the control variable, and satisfies a specific normalization property. Semimartingale convergence is used to establish the main result.
In recent years, substantial progress in shape analysis has been achieved through methods that use the spectra and eigenfunctions of discrete Laplace operators. In this work, we study spectra and eigenfunctions of discrete differential operators that can serve as an alternative to discrete Laplacians for applications in shape analysis. We construct such operators as the Hessians of surface energies or deformation energies. In particular, we design a quadratic energy such that, on the one hand, its Hessian equals the Laplace operator if the surface is part of the Euclidean plane, and, on the other hand, the Hessian eigenfunctions are sensitive to the extrinsic curvature (e.g., sharp bends) on curved surfaces. Furthermore, we consider eigenvibrations induced by deformation energies, and we derive a closed-form representation of the Hessian (at the rest state of the energy) for a general class of deformation energies. Based on these spectra and eigenmodes, we derive two shape signatures: one that measures the similarity of points on a surface, and another that can be used to identify features of surfaces.
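As a minimal, hedged illustration of spectral shape signatures (a combinatorial graph Laplacian on a small grid stands in for the Hessian-based operators of the paper; all sizes are made up):

```python
import numpy as np

# combinatorial graph Laplacian of a 12 x 12 grid "mesh"
nx, ny = 12, 12
N = nx * ny
idx = lambda i, j: i * ny + j
L = np.zeros((N, N))
for i in range(nx):
    for j in range(ny):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < nx and 0 <= b < ny:
                L[idx(i, j), idx(i, j)] += 1.0
                L[idx(i, j), idx(a, b)] -= 1.0

evals, evecs = np.linalg.eigh(L)
k = 8
signature = evecs[:, 1:1 + k]   # one row per vertex: first k nontrivial eigenfunctions
print("smallest eigenvalues:", np.round(evals[:5], 4))
```

Comparing rows of such a signature matrix gives a simple similarity measure between points; the paper's point is that replacing the Laplacian by a Hessian-based operator makes the signature sensitive to extrinsic features as well.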
We consider a class of generalized capital asset pricing models in continuous time with a finite number of agents and tradable securities. The securities may not be sufficient to span all sources of uncertainty. If the agents have exponential utility functions and the individual endowments are spanned by the securities, an equilibrium exists and the agents’ optimal trading strategies are constant. Affine processes, and the theory of information-based asset pricing are used to model the endogenous asset price dynamics and the terminal payoff. The derived semi-explicit pricing formulae are applied to numerically analyze the impact of the agents’ risk aversion on the implied volatility of simultaneously-traded European-style options.
We solve Skorokhod's embedding problem for Brownian motion with linear drift $(W_t + \kappa t)_{t\geq 0}$ by means of techniques from stochastic control theory. The search for a stopping time $T$ such that the law of $W_T + \kappa T$ coincides with a prescribed law $\mu$ possessing a first moment is based on solutions of backward stochastic differential equations of quadratic type. This new approach generalizes the approach by Bass [Bas] to the classical version of Skorokhod's embedding problem, which uses martingale representation techniques.
We consider backward stochastic differential equations with drivers of quadratic growth (qgBSDE). We prove several statements concerning path regularity and stochastic smoothness of the solution processes of the qgBSDE; in particular, we prove an extension of Zhang's path regularity theorem to the quadratic growth setting. We give explicit convergence rates for the difference between the solution of a qgBSDE and its truncation, filling an important gap in numerics for qgBSDE. We give an alternative proof of second-order Malliavin differentiability for BSDE with drivers that are Lipschitz continuous (and differentiable), and then derive the same result for qgBSDE.
We consider a dynamical system described by the differential equation $\dot{Y}_t = -U'(Y_t)$ with a unique stable point at the origin. We perturb the system by L\'evy noise of intensity $\varepsilon$ to obtain the stochastic differential equation $dX^\varepsilon_t = -U'(X^\varepsilon_{t-})\,dt + \varepsilon\, dL_t$. The process $L$ is a symmetric L\'evy process whose jump measure $\nu$ has exponentially light tails, $\nu([u,\infty)) \sim \exp(-u^\alpha)$, $\alpha > 0$, $u \to \infty$. We study the first exit problem for the trajectories of the solutions of the stochastic differential equation from the interval $[-1, 1]$. In the small noise limit $\varepsilon \to 0$ we determine the law and the mean value of the first exit time and discover an intriguing phase transition at the critical index $\alpha = 1$.
Mathematical modeling often helps to provide a systems perspective on gene regulatory networks. In particular, qualitative approaches are useful when detailed kinetic information is lacking. Multiple methods have been developed that implement qualitative information in different ways, e.g., in purely discrete or hybrid discrete/continuous models. In this paper, we compare the discrete asynchronous logical modeling formalism for gene regulatory networks due to R. Thomas with piecewise affine differential equation models.
We provide a local characterization of the qualitative dynamics of a piecewise affine differential equation model using the discrete dynamics of a corresponding Thomas model. Based on this result, we investigate the consistency of higher-level dynamical properties such as attractor characteristics and reachability. We show that although the two approaches are based on equivalent information, the resulting qualitative dynamics are different. In particular, the dynamics of the piecewise affine differential equation model is not a simple refinement of the dynamics of the Thomas model.
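A minimal sketch of the Thomas-style asynchronous dynamics referred to above, on a hypothetical two-gene toy network (the regulatory logic is invented for illustration):

```python
# Asynchronous (Thomas-style) discrete dynamics: from each state, every
# component whose update differs yields a separate successor state.
def successors(state, update):
    nxt = update(state)
    return [tuple(nxt[i] if i == j else state[i] for i in range(len(state)))
            for j in range(len(state)) if nxt[j] != state[j]] or [state]

# toy regulatory logic: gene 0 activates gene 1, gene 1 inhibits gene 0
update = lambda s: (1 - s[1], s[0])
for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(s, "->", successors(s, update))
```

The resulting state transition graph is the discrete object whose attractors and reachability properties are compared with those of the piecewise affine differential equation model.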
Deformable surface models are often represented as triangular meshes in image segmentation applications. For a fast and easily regularized deformation onto the target object boundary, the vertices of the mesh are commonly moved along line segments (typically surface normals). However, in case of high mesh curvature, these lines may intersect the target boundary at “non-corresponding” positions, or may not intersect it at all. Consequently, certain deformations cannot be achieved. We propose omnidirectional displacements for deformable surfaces (ODDS) to overcome this limitation. ODDS allow each vertex to move not only along a line segment but within a surrounding sphere, and achieve globally optimal deformations subject to local regularization constraints. However, allowing a ball-shaped instead of a linear range of motion per vertex significantly increases runtime and memory requirements. To alleviate this drawback, we propose a hybrid approach, fastODDS, with improved runtime and reduced memory requirements. Furthermore, fastODDS can also cope with simultaneous segmentation of multiple objects. We show the theoretical benefits of ODDS with experiments on synthetic data, and evaluate ODDS and fastODDS quantitatively on clinical image data of the mandible and the hip bones. There, we assess both the global segmentation accuracy as well as the local accuracy in high curvature regions, such as the tip-shaped mandibular coronoid processes and the ridge-shaped acetabular rims of the hip bones.
Dynamical fingerprints of macromolecules obtained from experiments often seem to indicate two- or three-state kinetics, while simulations typically reveal a more complex picture. Markov state models of molecular conformational dynamics can be used to predict these dynamical fingerprints and to reconcile experiment with simulation. This is illustrated on two model systems: a one-dimensional energy surface and a four-state model of a protein folding equilibrium. We show that (i) there might be no process which corresponds to our notion of folding, (ii) often the experiment will be insensitive to some of the processes present in the system, and (iii) with a suitable combination of the observable and the initial conditions in a relaxation experiment one can selectively measure specific processes. Furthermore, our method can be used to design experiments such that specific processes appear with large amplitudes. We demonstrate this for a fluorescence quenching experiment on the MR121-G9-W peptide.
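A hedged sketch of the fingerprint prediction for a relaxation experiment (the four-state transition matrix, observable and initial distribution are invented; the spectral formula itself is standard for Markov state models):

```python
import numpy as np

# symmetric, row-stochastic transition matrix of a hypothetical 4-state MSM
T = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.08, 0.82, 0.09, 0.01],
              [0.02, 0.09, 0.79, 0.10],
              [0.00, 0.01, 0.10, 0.89]])
tau = 1.0                                  # lag time of the model

lam, R = np.linalg.eigh(T)                 # T = R diag(lam) R^T
lam, R = lam[::-1], R[:, ::-1]             # slowest processes first

p0 = np.array([1.0, 0.0, 0.0, 0.0])        # relaxation starts in state 1
a  = np.array([0.0, 0.2, 0.7, 1.0])        # state-wise observable (e.g. fluorescence)

# E[a](t) = sum_i gamma_i * lam_i^(t/tau), with gamma_i = (p0 . r_i)(r_i . a)
gamma = (p0 @ R) * (R.T @ a)
timescales = np.r_[np.inf, -tau / np.log(lam[1:])]
for g, t in zip(gamma, timescales):
    print(f"amplitude {g:+.4f}   implied timescale {t:7.2f}")
```

Processes with near-zero amplitude are invisible to the experiment (point (ii) above); choosing `p0` and `a` to maximize a particular `gamma_i` is the design idea behind point (iii).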
Resolving the apparent gap in complexity between simulated and measured kinetics of biomolecules
(2012)
Molecular simulations of biomolecules often reveal a complex picture of their kinetics, whereas kinetic experiments typically seem to indicate considerably simpler two- or three-state kinetics. Markov state models (MSM) provide a tool to link simulation and experiment, and to resolve this apparent contradiction.
When simulating isolated resonators, the application of transparent boundary conditions causes the approximated spectrum to be polluted with spurious solutions. Distinguishing these artificial solutions from solutions with a physical meaning is often difficult and requires a priori knowledge of the spectrum or of the expected field distribution of the resonant states. We present an implementation of the pole condition as a transparent boundary condition; it distinguishes between incoming and outgoing waves by the location of the poles of their Laplace transforms. This implementation depends on one tuning parameter. We use the sensitivity of the computed solutions to perturbations of this parameter as a means to identify spurious solutions. To obtain global statements, we combine this technique with a convergence monitor for the boundary condition.
Global spatial regularity for elasticity models with cracks, contact and other nonsmooth constraints
(2012)
A global higher differentiability result in Besov spaces is proved for the displacement fields of linear elastic models with self-contact. Domains with cracks are studied, where nonpenetration conditions/Signorini conditions are imposed on the crack faces. It is shown that in a neighborhood of crack tips (in 2D) or crack fronts (in 3D) the displacement fields are $B^{3/2}_{2,\infty}$ regular. The proof relies on a difference quotient argument for the directions tangential to the crack. In order to obtain the regularity estimates also in the normal direction, an argument due to Ebmeyer/Frehse/Kassmann is modified. The methods are then applied to further examples like contact problems with nonsmooth rigid foundations, to a model with Tresca friction, and to minimization problems with nonsmooth energies and constraints as they occur, for instance, in the modeling of shape memory alloys. Based on Falk's approximation theorem for variational inequalities, convergence rates for FE discretizations of contact problems are derived, relying on the proven regularity properties. Several numerical examples illustrate the theoretical results.
MIPLIB 2010
(2012)
This paper reports on the fifth version of the Mixed Integer Programming Library. The MIPLIB 2010 is the first MIPLIB release that has been assembled by a large group from academia and from industry, all of whom work in integer programming. There was mutual consent that the concept of the library had to be expanded in order to fulfill the needs of the community. The new version comprises 361 instances sorted into several groups. This includes the main benchmark test set of 87 instances, which are all solvable by today's codes, and also the challenge test set with 164 instances, many of which are currently unsolved. For the first time, we include scripts to run automated tests in a predefined way. Further, there is a solution checker to test the accuracy of provided solutions using exact arithmetic.
In this paper we revisit models for the description of the evolution of crystalline films with anisotropic surface energies.
We prove equivalences of symmetry properties of anisotropic surface energy models commonly used in the literature.
Then we systematically develop a framework for the derivation of surface diffusion models for the self-assembly of quantum dots during Stranski-Krastanov
growth that include surface energies also with large anisotropy as well as the effect of wetting energy,
elastic energy and a randomly perturbed atomic deposition flux.
A linear stability analysis for the resulting sixth-order semilinear evolution equation for the thin film surface shows that the new model allows
for large anisotropy and gives rise to the formation of anisotropic quantum dots. The nonlinear three-dimensional evolution is investigated via numerical solutions.
These suggest that increasing anisotropy stabilizes the faceted surfaces and may lead to a dramatic slow-down of the coarsening of the dots.
Scalable Frames
(2012)
Tight frames can be characterized as those frames which possess optimal numerical stability properties. In this paper, we consider the question of modifying a general frame to generate a tight frame by rescaling its frame vectors, a process which can also be regarded as perfect preconditioning of a frame by a diagonal operator. A frame is called scalable if such a diagonal operator exists. We derive various characterizations of scalable frames, thereby including the infinite-dimensional situation. Finally, we provide a geometric interpretation of scalability in terms of conical surfaces.
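A small numerical check of scalability, as a sketch under our own conventions (the frame bound is normalized to one, so we solve for nonnegative squared weights with `nnls`; the example frames are made up):

```python
import numpy as np
from scipy.optimize import nnls

# A frame {f_i} in R^d is scalable iff there exist weights w_i = c_i^2 >= 0
# with sum_i w_i f_i f_i^T = I (after normalizing the frame bound to 1).
def scalable(F, tol=1e-10):
    d, m = F.shape
    M = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(m)])
    w, residual = nnls(M, np.eye(d).ravel())
    return residual < tol, w

# orthonormal basis plus a duplicate vector: scalable (duplicate gets weight 0)
print(scalable(np.array([[1.0, 0.0, 1.0],
                         [0.0, 1.0, 0.0]])))
# two non-orthogonal vectors in R^2: not scalable
print(scalable(np.array([[1.0, 1.0],
                         [0.0, 1.0]])))
```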