We derive a formula for the backward error of a complex number $\lambda$ when considered as an approximate eigenvalue
of a Hermitian matrix pencil or polynomial with respect to Hermitian perturbations. Analogous formulas are also obtained for approximate
eigenvalues of matrix pencils and polynomials with related structures such as skew-Hermitian, $*$-even, and $*$-odd.
Numerical experiments suggest that in many cases there is a significant difference between the backward
errors with respect to perturbations that preserve structure and those with respect to arbitrary perturbations.
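For orientation, the standard normwise definitions can be sketched as follows (the exact norms and weights used in the paper are an assumption here): for a matrix polynomial $P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i$ and an approximate eigenpair $(\lambda, x)$, the unstructured backward error is
$\eta(\lambda, x) = \|P(\lambda)x\|_2 \,\big/\, \big(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\big)\|x\|_2, \qquad \eta(\lambda) = \min_{x \neq 0} \eta(\lambda, x),$
while the structured backward error $\eta^{\mathbb{S}}(\lambda)$ takes the minimum only over perturbations $\Delta A_i$ that stay in the structure class $\mathbb{S}$ (e.g., Hermitian). Since fewer perturbations are admitted, $\eta(\lambda) \le \eta^{\mathbb{S}}(\lambda)$ always holds, and the numerical experiments quantify how large this gap can be.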
Dissipative Hamiltonian (DH) systems are an important concept in energy-based modeling of dynamical
systems. A major advantage of the DH formulation is that physical properties are encoded
algebraically in the system structure. Making use of this structure,
it is easy to see that DH systems are stable. In this paper
we discuss when a linear constant-coefficient DH system lies on the boundary
of the region of asymptotic stability, i.e., when it has purely imaginary eigenvalues,
and how much it has to be perturbed to reach this boundary. For unstructured systems this distance to instability (stability radius) is well understood. Here,
explicit formulas for this distance under structure-preserving perturbations are determined.
It is also shown (via numerical examples) that the asymptotic stability of a DH system is
much more robust under structured perturbations than under unstructured ones, since the
distance can be much larger.
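To sketch why stability is built into the structure (using one common linear DH form; whether this is exactly the class treated in the paper is an assumption): consider $\dot{x} = (J - R)Qx$ with $J = -J^H$, $R = R^H \succeq 0$, and $Q = Q^H \succ 0$. The energy $V(x) = x^H Q x$ then satisfies
$\tfrac{d}{dt} V(x) = x^H Q \big( (J-R) + (J-R)^H \big) Q x = -2\,(Qx)^H R\,(Qx) \le 0,$
so $V$ is a Lyapunov function, all eigenvalues lie in the closed left half-plane, and purely imaginary eigenvalues mark exactly the stability boundary studied here.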
Canonical forms for matrix triples $(A,G,\hat G)$, where
$A$ is arbitrary rectangular and $G$, $\hat G$ are either real symmetric
or skew symmetric, or complex Hermitian or skew Hermitian, are derived.
These forms generalize classical product Schur forms as well as
singular value decompositions.
A new proof for the complex case is given, in which there is no need to
distinguish whether $G$ and $\hat G$ are Hermitian or skew Hermitian.
This proof is independent of the results of Bolschakov/Reichstein 1995, where
a similar canonical form was obtained for the complex case,
and it allows a generalization to the real case. Here,
the three cases, i.e., that
$G$ and $\hat G$ are both symmetric, both skew symmetric, or one of each,
are treated separately.
We discuss the long-standing problem of how to deflate the part associated with the eigenvalue infinity in a structured matrix pencil using structure-preserving unitary transformations. We derive such a deflation procedure and apply this new technique to symmetric, Hermitian, or alternating pencils, and, in a modified form, to (anti-)palindromic pencils. We present a detailed error and perturbation analysis of this and other deflation procedures and demonstrate the properties of the new algorithm with several numerical examples.
The paper provides a structural analysis of the feasible set defined by linear probabilistic constraints. Emphasis is placed on single (individual) probabilistic constraints. A classical convexity result by Van de Panne/Popp and Kataoka is extended to a broader class of distributions and to more general functions of the decision vector. The range of probability levels for which convexity can be expected is identified exactly. Apart from convexity, nontriviality and compactness of the
feasible set are also precisely characterized. The relation between feasible sets with negative and with nonnegative right-hand side is revealed. Finally, an existence result is formulated for the more difficult case of joint probabilistic constraints.
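For the Gaussian case, the classical convexity result takes the following concrete form (the paper extends it to a broader class of distributions and more general functions of the decision vector): if $\xi \sim \mathcal{N}(\mu, \Sigma)$, the single probabilistic constraint $\mathbb{P}(\xi^T x \le b) \ge p$ is equivalent to
$\mu^T x + \Phi^{-1}(p)\, \|\Sigma^{1/2} x\|_2 \le b,$
where $\Phi$ is the standard normal distribution function. Since $\Phi^{-1}(p) \ge 0$ exactly for $p \ge 1/2$, the left-hand side is then a convex function of $x$, so the feasible set is convex for all probability levels $p \ge 1/2$.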
Actin is a major structural protein of the eukaryotic cytoskeleton and enables cell motility.
Here, we present a model of the actin filament (F-actin) that incorporates the global structure
of the recently published model by Oda et al. but also conserves internal stereochemistry. The
model is compared with other recent F-actin models using molecular dynamics (MD) simulation.
A number of structural determinants, such as the protomer propeller angle, the
number of hydrogen bonds, and the structural variation among the protomers, are analyzed.
The MD comparison is found to reflect the evolution in quality of actin models over the last
six years. In addition, simulations of the model are carried out in states with either ADP or
ATP bound, and local hydrogen-bonding differences are characterized. The results point to the
significance of a direct interaction of Gln137 with ATP for the activation of ATPase activity after
the G-to-F-actin transition.
Diffusion Weighted Imaging has become, and will certainly continue to be, an important tool in medical research and diagnostics. Data obtained with Diffusion Weighted Imaging are characterized by a high noise level. Thus, estimation of quantities like anisotropy indices or the main diffusion direction may be significantly compromised by noise in clinical or neuroscience applications. Here, we present a new package dti for R, which provides functions for the analysis of diffusion weighted data within the widely used diffusion tensor model. This includes smoothing by a recently proposed structural adaptive smoothing procedure based on the Propagation-Separation approach. We extend the procedure and show how a correction for Rician bias can be incorporated. We use a heteroscedastic nonlinear regression model to estimate the diffusion tensor. The smoothing procedure naturally adapts to structures of different size and thus avoids oversmoothing edges and fine structures. We illustrate the usage and capabilities of the package through some examples.
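To make the estimation step concrete, here is a minimal stand-in for the basic tensor fit (the package itself is written in R and uses a heteroscedastic nonlinear regression with Rician bias correction and adaptive smoothing, all of which this log-linearized Python sketch omits; all names are illustrative):

import numpy as np

def fit_diffusion_tensor(signals, bvals, bvecs):
    """Log-linearized least-squares fit of the diffusion tensor model
    S_i = S_0 * exp(-b_i * g_i^T D g_i), from signals (n,), b-values (n,),
    and unit gradient directions (n, 3)."""
    g = bvecs
    # Design matrix for the six unique entries of the symmetric tensor D.
    B = np.column_stack([
        bvals * g[:, 0] ** 2,
        bvals * g[:, 1] ** 2,
        bvals * g[:, 2] ** 2,
        2 * bvals * g[:, 0] * g[:, 1],
        2 * bvals * g[:, 0] * g[:, 2],
        2 * bvals * g[:, 1] * g[:, 2],
    ])
    # log S_i = log S_0 - (design row) . (tensor entries): linear in both.
    A = np.column_stack([-B, np.ones(len(bvals))])
    coef, *_ = np.linalg.lstsq(A, np.log(signals), rcond=None)
    dxx, dyy, dzz, dxy, dxz, dyz, log_s0 = coef
    D = np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])
    return D, np.exp(log_s0)

Anisotropy indices and the main diffusion direction mentioned above are then read off the eigendecomposition of D.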
Functional Magnetic Resonance Imaging inherently involves noisy measurements and a severe multiple testing
problem. Smoothing is usually applied to reduce the effective number of multiple
comparisons and to locally integrate the signal, and hence increase the
signal-to-noise ratio. Here, we provide a new structural adaptive segmentation
algorithm (AS)
that naturally combines signal detection with noise reduction in one procedure.
Moreover, the new method
is closely related to a recently proposed structural adaptive smoothing
algorithm and preserves the shape and spatial extent of activation areas without
blurring their borders.
In this paper, we consider the characterization of strong stationary solutions to
equilibrium problems with equilibrium constraints (EPECs). Assuming that the underlying
generalized equation satisfies strong regularity in the sense of Robinson, an explicit
multiplier-based stationarity condition can be derived. This is then applied
to an equilibrium model arising from ISO-regulated electricity spot markets.
Research on flows over time has been conducted in two separate and largely independent lines of work, namely \emph{discrete} and \emph{continuous} models, depending on whether a discrete or continuous representation of time is used. Recently, Borel flows have been introduced to build a bridge between these two models.
In this paper, we consider the maximum Borel flow problem formulated in a network where capacities on arcs are given as Borel measures and storage might be allowed at the nodes of the network. This problem is formulated as a linear program in a space of measures. We define a dual problem and prove a strong duality result. We show that strong duality is closely related to a MaxFlow-MinCut Theorem.
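For orientation, the finite-dimensional analogue of this duality is the classical max-flow LP and its cut dual (the measure-theoretic version of the paper generalizes both problems): on a digraph with arc capacities $u_a$, source $s$, and sink $t$,
$\max \big\{ \sum_{a \in \delta^+(s)} f_a - \sum_{a \in \delta^-(s)} f_a \;:\; f \text{ conserves flow at each } v \neq s,t,\; 0 \le f \le u \big\}$
has the dual
$\min \big\{ \sum_a u_a y_a \;:\; y_a \ge \pi_v - \pi_w \text{ for each arc } a=(v,w),\; \pi_s = 1,\; \pi_t = 0,\; y \ge 0 \big\},$
whose integral optimal solutions are exactly the indicator vectors of minimum $s$-$t$ cuts; strong LP duality then yields the MaxFlow-MinCut Theorem.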
We present a novel algorithm for the automatic parameterization of tube-like surfaces of arbitrary genus, such as the surfaces of knots, trees, blood vessels, neurons, or any tubular graph, with a globally consistent stripe texture. We use the principal curvature frame field of the underlying tube-like surface to guide the creation of a global, topologically consistent stripe parameterization of the surface. Our algorithm extends the QuadCover algorithm and is based, first, on the use of so-called projective vector fields instead of frame fields, and second, on different types of branch points. This not only simplifies the mathematical theory, but also reduces computation time through the decomposition of the underlying stiffness matrices.
In the first part of this article, we showed how time-dependent optimal control for partial
differential equations can be realized in a modern high-level modeling and simulation package. In this second part we extend our approach to (state-)constrained problems. "Pure" state constraints in a function space
setting lead to non-regular Lagrange multipliers (if they exist at all), i.e., the Lagrange multipliers are in general Borel
measures. This is overcome by different regularization techniques.
To implement inequality constraints, active set methods and interior point methods (or barrier methods) are widely used. We show how these techniques can be realized in the modeling and simulation package COMSOL
Multiphysics.
In contrast to the first part, only the one-shot approach based on space-time elements is considered. We implement a projection method based on active sets as well as a barrier method, and compare these methods
with a specialized PDE optimization program and with a program that optimizes the discrete version of the given problem.
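To sketch the regularization idea for a pure state constraint $y \le \psi$ (a generic formulation; whether these are precisely the variants implemented here is an assumption): a barrier method replaces the constrained problem by
$\min_{y,u}\; J(y,u) - \mu \int_Q \ln(\psi - y)\,dx\,dt, \qquad \mu \downarrow 0,$
which keeps iterates strictly feasible, while an active-set approach can be based on the Moreau-Yosida-type penalization $J(y,u) + \tfrac{\gamma}{2}\,\|\max(0,\, y - \psi)\|_{L^2(Q)}^2$ with large $\gamma$; in both cases the problematic measure-valued multipliers are traded for function-valued quantities.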
We show how time-dependent optimal control for partial differential equations can be realized in a modern high-level modeling and simulation package. We summarize the general formulation of distributed and boundary control for initial-boundary value problems for parabolic PDEs and derive the optimality system, including the adjoint equation. The main difficulty is that the latter has to be integrated backwards in time. This means that considerable implementation effort is necessary to couple the state and adjoint equations in order to compute an optimal solution. Furthermore, a large amount of computational effort or storage is required to provide the needed information (i.e., the trajectories) of the state and adjoint variables. We show how this can be realized in the modeling and simulation package COMSOL Multiphysics, taking advantage of built-in discretization, solver, and post-processing technologies and thus minimizing the implementation effort. We present two strategies: the treatment of the coupled optimality system in the space-time cylinder, and an iterative approach that sequentially solves the state and adjoint systems and updates the controls. Numerical examples show the elegance of the implementation and the efficiency of the two strategies.
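For reference, the optimality system has the following shape in the simplest distributed-control model problem (a standard tracking-type example, not necessarily the exact setting of the paper): minimizing $J(y,u) = \tfrac12 \|y - y_d\|_{L^2(Q)}^2 + \tfrac{\alpha}{2} \|u\|_{L^2(Q)}^2$ subject to $y_t - \Delta y = u$ in $Q = \Omega \times (0,T)$ with homogeneous Dirichlet boundary conditions and $y(0) = y_0$ leads to the adjoint equation
$-p_t - \Delta p = y - y_d, \qquad p(T) = 0,$
which runs backwards in time (the coupling difficulty mentioned above), together with the gradient condition $\alpha u + p = 0$.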
Boolean modeling frameworks have long since proved their worth for capturing and analyzing essential characteristics of complex systems.
Hybrid approaches aim at exploiting the advantages of Boolean formalisms while refining expressiveness. In this paper, we present a formalism that augments Boolean models with stochastic aspects. More specifically, biological reactions affecting a system in a given state are associated
with probabilities, resulting in dynamical behavior represented as a Markov chain. Using this approach, we model and analyze the cytokinin
response network of Arabidopsis thaliana with a focus on clarifying the character of an important feedback mechanism.
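To illustrate the construction (a hypothetical three-component toy network, not the cytokinin model itself), the following Python sketch turns probability-weighted Boolean update rules into an explicit Markov transition matrix:

import itertools
import numpy as np

# Each rule maps a Boolean state to a successor state and fires with the
# given probability; the probabilities over all rules sum to one.
rules = [
    (lambda s: (1, s[1], s[2]), 0.5),          # activation of component 0
    (lambda s: (s[0], s[0], s[2]), 0.3),       # component 1 follows component 0
    (lambda s: (s[0], s[1], 1 - s[1]), 0.2),   # component 2 negates component 1
]

states = list(itertools.product([0, 1], repeat=3))
index = {s: i for i, s in enumerate(states)}
P = np.zeros((len(states), len(states)))
for s in states:
    for rule, prob in rules:
        P[index[s], index[rule(s)]] += prob

assert np.allclose(P.sum(axis=1), 1.0)  # P is a stochastic matrix

Long-run behavior, e.g. of a feedback mechanism, can then be analyzed through the stationary distribution of $P$.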
We present a mixed-integer multistage stochastic programming model for the short-term unit commitment of a hydro-thermal power system under uncertainty in load, inflow to reservoirs, and prices for fuel and delivery contracts. The model is implemented for uncertain load and tested on realistic data from a German power utility. Load scenario trees are generated by a procedure consisting of two steps: (i) simulation of load scenarios using an explicit representation of the load distribution, and (ii) construction of a tree out of these scenarios. The dimension of the corresponding mixed-integer programs ranges up to 200,000 binary and 350,000 continuous variables. The model is solved by a Lagrangian-based decomposition strategy exploiting the loose coupling structure. Solving the Lagrangian dual by a proximal bundle method leads to a successive decomposition into single-unit subproblems, which are solved by specific algorithms. Finally, Lagrangian heuristics are used to construct nearly optimal first-stage decisions.
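Schematically, the decomposition works as follows (a stylized version; the actual model couples the units through several constraint types): relaxing load-coverage constraints $\sum_i p_i^t \ge d^t$ with multipliers $\lambda^t \ge 0$ gives the dual function
$D(\lambda) = \sum_t \lambda^t d^t + \sum_i \min_{(u_i, p_i) \in X_i} \sum_t \big( c_i(p_i^t, u_i^t) - \lambda^t p_i^t \big),$
which separates into one subproblem per unit $i$ over its private feasible set $X_i$; the proximal bundle method maximizes $D(\lambda)$, and the Lagrangian heuristics repair the resulting (generally infeasible) primal solution.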
We consider the preemptive and non-preemptive problems of scheduling jobs with precedence constraints on parallel machines with the
objective to minimize the sum of~(weighted) completion times. We investigate an online model in which the scheduler learns about a
job when all its predecessors have completed. For scheduling on a single machine, we show matching lower and upper bounds of~$\Theta(n)$ and~$\Theta(\sqrt{n})$ for jobs with general and equal weights, respectively. We also derive corresponding results on parallel machines.
Our result for arbitrary job weights holds even in the more general stochastic online scheduling model where, in addition to the limited information about the job set, processing times are uncertain. For a
large class of processing time distributions, we also derive an improved performance guarantee if weights are equal.
We consider a non-preemptive, stochastic parallel machine
scheduling model with the goal of minimizing the weighted completion
times of jobs. In contrast to the classical stochastic model where jobs
with their processing time distributions are known beforehand, we assume
that jobs appear one by one, and every job must be assigned
to a machine online. We propose a simple online scheduling policy for
that model, and prove a performance guarantee that matches the currently
best known performance guarantee for stochastic parallel machine
scheduling. For the more general model with job release dates we derive
an analogous result, and for NBUE distributed processing times we
even improve upon the previously best known performance guarantee for
stochastic parallel machine scheduling. Moreover, we derive some lower
bounds on approximation.
We consider empirical approximations of two-stage stochastic mixed-integer linear programs and derive central
limit theorems for the objectives and optimal values. The limit theorems are based on empirical process theory
and the functional delta method. We also show how these limit theorems can be used to derive confidence intervals
for optimal values via a certain modification of the bootstrapping method.
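As a toy illustration of the bootstrapping idea (using a closed-form newsvendor problem as a stand-in two-stage program and the plain bootstrap rather than the modification proposed in the paper; all parameters are made up):

import numpy as np

rng = np.random.default_rng(0)

def optimal_value(sample, cost=1.0, price=1.5):
    # Toy newsvendor as a stand-in two-stage problem:
    # v(sample) = min_x [ cost*x - price * mean(min(x, xi)) ], solved on a grid.
    xs = np.linspace(0, sample.max(), 200)
    vals = cost * xs - price * np.minimum.outer(xs, sample).mean(axis=1)
    return vals.min()

data = rng.exponential(scale=10.0, size=500)   # observed scenarios
v_hat = optimal_value(data)

# Bootstrap the distribution of the empirical optimal value.
boot = np.array([optimal_value(rng.choice(data, size=data.size, replace=True))
                 for _ in range(1000)])
lo, hi = np.quantile(boot - v_hat, [0.025, 0.975])
print(f"95% basic bootstrap CI: [{v_hat - hi:.3f}, {v_hat - lo:.3f}]")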
We analyze an interactive model of credit ratings where external shocks, initially
affecting only a small number of firms, spread by a contagious chain reaction to the
entire economy. Counterparty relationships along with discrete adjustments of credit
ratings generate a transition mechanism that allows the financial distress of one firm
to spill over to its business partners. Such contagious spreading of financial distress
constitutes a source of intrinsic risk for large portfolios of credit sensitive securities that
cannot be “diversified away.” We provide a complete characterization of the fluctuations
of credit ratings in large economies when adjustments follow a threshold rule. We also
analyze the effects of downgrading cascades on aggregate losses of credit portfolios. We
show that the loss distribution has a power-law tail if the interaction between different
companies is strong enough.
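To give a feel for the threshold rule, here is a purely hypothetical toy cascade in Python (the paper's model of counterparty relationships and rating dynamics is richer than this; all parameters are made up):

import numpy as np

rng = np.random.default_rng(1)
n = 200
# Each firm is downgraded once the number of its downgraded business
# partners reaches a fixed threshold.
partners = [rng.choice(n, size=10, replace=False) for _ in range(n)]
threshold = 2
downgraded = np.zeros(n, dtype=bool)
downgraded[rng.choice(n, size=20, replace=False)] = True  # initial shock

changed = True
while changed:  # iterate the threshold rule to its fixed point
    exposed = np.array([downgraded[p].sum() for p in partners])
    newly = (~downgraded) & (exposed >= threshold)
    changed = newly.any()
    downgraded |= newly

print(downgraded.sum(), "of", n, "firms downgraded after the cascade")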
We present recent developments in the field of stochastic programming with regard to applications in power management. In particular, we discuss issues of scenario tree modeling, i.e., appropriate discrete approximations of the underlying stochastic parameters. Moreover, we suggest risk avoidance strategies via the incorporation of
so-called polyhedral risk functionals into stochastic programs. This approach, motivated by the tractability of the resulting problems, is a constructive framework providing particular flexibility with respect to the dynamic aspects of risk.
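The prototypical example of a polyhedral risk functional is the Conditional Value-at-Risk: for a random loss $Z$ and level $\alpha \in (0,1)$,
$\mathrm{CVaR}_\alpha(Z) = \min_{r \in \mathbb{R}} \big\{ r + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[\max(0,\, Z - r)\big] \big\},$
i.e., the risk value is itself the optimal value of a stochastic program with linear objective and constraints (the $\max$ term is linearized with an auxiliary variable). Plugging such functionals into an electricity portfolio model therefore preserves its (mixed-integer) linear structure, which is the tractability motivation mentioned above.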