ZIB-Report
18-03
We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
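Schematically, and with notation chosen here for illustration rather than quoted from the report, such a stability result takes the form $d_{\mathrm{H}}(\mu, \mu_N) \le C \, \mathbb{E}\bigl[|\Phi(u) - \Phi_N(u)|^2\bigr]^{1/2}$, where $\mu$ and $\mu_N$ denote the posteriors induced by the exact log-likelihood $\Phi$ and its random approximation $\Phi_N$, and the constant $C$ depends on the normalization constants of the two posteriors.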
18-05
The Steiner tree problem in graphs is a classical problem that commonly arises in practical applications in one of its many variants. Although the different Steiner tree problem variants are usually strongly related, solution approaches employed so far have been predominantly problem-specific. Against this backdrop, the solver SCIP-Jack was created as a general-purpose framework that can be used to solve the classical Steiner tree problem and 11 of its variants. This versatility is achieved by transforming the various problem variants into a general form and solving them by using a state-of-the-art MIP framework. Furthermore, SCIP-Jack includes various newly developed algorithmic components such as preprocessing routines and heuristics. The result is a high-performance solver that can be employed in massively parallel environments and is capable of solving previously unsolved instances. Since the introduction of SCIP-Jack at the 2014 DIMACS Challenge on Steiner problems, the overall performance of the solver has considerably improved. This article provides an overview of the current state.
18-09
Merging criteria for the definition of a local pore and the CSD computation of granular materials
(2018)
18-10
Mathematical models for bioregulatory networks can be based on different formalisms, depending on the quality of available data and the research question to be answered. Discrete Boolean models can be constructed based on qualitative data, which are frequently available. On the other hand, continuous models in terms of ordinary differential equations (ODEs) can incorporate time-series data and give more detailed insight into the dynamics of the underlying system. A few years ago, a method based on multivariate polynomial interpolation and Hill functions was developed for the automatic conversion of Boolean models to systems of ordinary differential equations. This method is frequently used by modellers in systems biology today, but there are only few results available about the conservation of mathematical structures and properties across the formalisms. Here, we consider subsets of the phase space where some components stay fixed, called trap spaces, and demonstrate how Boolean trap spaces can be linked to invariant sets in the continuous state space. This knowledge is of practical relevance since finding trap spaces in the Boolean setting, which is relatively easy, allows for the construction of reduced ODE models.
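A minimal sketch of this conversion, assuming the multilinear-interpolation-plus-Hill-function variant (often called the HillCube approach); the toggle-switch example and all parameter values are illustrative only:

# Sketch of the Boolean-to-ODE conversion via multilinear interpolation
# and Hill functions; parameters and example are illustrative.
from itertools import product

def multilinear(boolean_fn, x):
    """Multilinear interpolation of a Boolean function on [0,1]^n."""
    total = 0.0
    for corner in product((0, 1), repeat=len(x)):
        weight = 1.0
        for xi, bi in zip(x, corner):
            weight *= xi if bi else (1.0 - xi)
        total += boolean_fn(corner) * weight
    return total

def hill(x, theta=0.5, k=4):
    """Sigmoidal Hill function mapping [0,1] into [0,1]."""
    return x**k / (x**k + theta**k)

def rhs(boolean_fns, x, tau=1.0):
    """ODE right-hand side: dx_i/dt = (B_i(hill(x)) - x_i) / tau."""
    h = [hill(xi) for xi in x]
    return [(multilinear(f, h) - xi) / tau for f, xi in zip(boolean_fns, x)]

# Example: a two-gene toggle switch, x1 = NOT x2 and x2 = NOT x1.
fns = [lambda b: 1 - b[1], lambda b: 1 - b[0]]
print(rhs(fns, [0.9, 0.1]))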
18-07
Mitotic and meiotic spindles are microtubule-based structures that faithfully segregate chromosomes. Electron tomography is currently the method of choice for analyzing the three-dimensional architecture of both types of spindles. Over the years, we have developed methods and software for automatic segmentation and stitching of microtubules in serial sections for large-scale reconstructions. Three-dimensional reconstruction of microtubules, however, is only the first step towards biological insight. The second step is the analysis of the structural data to derive measurable spindle properties. Here, we present a comprehensive set of techniques to quantify spindle parameters. These techniques provide quantitative analyses of specific microtubule classes and are applicable to a variety of tomographic reconstructions of spindles from different organisms.
18-08
Estimation of the time of death based on a single measurement of body core temperature is a standard procedure in forensic medicine. Mechanistic models that simulate heat transport promise higher accuracy than established phenomenological models, in particular in nonstandard situations, but involve many physical parameters that are not known exactly. Identifying both the time of death and the physical parameters from multiple temperature measurements is one way to reduce the uncertainty significantly. In this paper, we consider the inverse problem in a Bayesian setting and perform both local and sampling-based uncertainty quantification, where proper orthogonal decomposition is used as model reduction for fast solution of the forward model. Based on the local uncertainty quantification, optimal design of experiments is performed in order to minimize the uncertainty in the time-of-death estimate for a given number of measurements. For reasons of practicability, temperature acquisition points are selected from a set of candidates at different spatial and temporal locations. Applied to a real corpse model, a significant accuracy improvement is obtained already with a small number of measurements.
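To illustrate the design step in isolation: a generic greedy D-optimal selection under a linearized Gaussian model picks, one at a time, the candidate measurement that most increases the determinant of the Fisher information. The sensitivity matrix and noise level below are placeholders, not the corpse model of the paper.

# Generic greedy D-optimal design sketch for a linearized Gaussian model;
# J and the candidate set are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(40, 3))      # sensitivities: 40 candidate points, 3 parameters
sigma2 = 0.25                     # assumed measurement noise variance

def greedy_design(J, budget):
    chosen = []
    info = 1e-8 * np.eye(J.shape[1])   # regularization standing in for the prior
    for _ in range(budget):
        best, best_gain = None, -np.inf
        for i in range(J.shape[0]):
            if i in chosen:
                continue
            cand = info + np.outer(J[i], J[i]) / sigma2
            gain = np.linalg.slogdet(cand)[1]   # log-determinant criterion
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        info += np.outer(J[best], J[best]) / sigma2
    return chosen

print(greedy_design(J, budget=4))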
18-61
The perfect matching polytope, i.e., the convex hull of (incidence vectors of) perfect matchings of a graph, is used in many combinatorial algorithms. Kotzig, Lovász and Plummer developed a decomposition theory for graphs with perfect matchings and their corresponding polytopes, known as the tight cut decomposition, which breaks down every graph into a number of indecomposable graphs, so-called bricks. For many properties that are of interest on graphs with perfect matchings, including the description of the perfect matching polytope, it suffices to consider these bricks. A key result by Lovász on the tight cut decomposition is that the list of bricks obtained is the same independent of the choice of tight cuts made during the tight cut decomposition procedure. This implies that finding a tight cut decomposition is polynomial-time equivalent to finding a single tight cut.
We generalise the notions of a tight cut, a tight cut contraction and a tight cut decomposition to hypergraphs. By providing an example, we show that the outcome of the tight cut decomposition on general hypergraphs is no longer unique. However, we are able to prove that the uniqueness of the tight cut decomposition is preserved on a slight generalisation of uniform hypergraphs. Moreover, we show how the tight cut decomposition leads to a decomposition of the perfect matching polytope of uniformable hypergraphs and that the recognition problem for tight cuts in uniformable hypergraphs is polynomial-time solvable.
18-60
Large Neighborhood Search (LNS) heuristics are among the most powerful but also most expensive heuristics for mixed integer programs (MIP). Ideally, a solver learns adaptively which LNS heuristics work best for the MIP problem at hand in order to concentrate its limited computational budget.
To this end, this work introduces Adaptive Large Neighborhood Search (ALNS) for MIP, a primal heuristic that acts as a framework for eight popular LNS heuristics such as Local Branching and Relaxation Induced Neighborhood Search (RINS). We distinguish the available LNS heuristics by their individual search domains, which we call neighborhoods. The decision which neighborhood should be executed is guided by selection strategies for the multi-armed bandit problem, a related optimization problem in which suitable actions have to be chosen to maximize a reward function. In this paper, we propose an LNS-specific reward function to learn to distinguish between the available neighborhoods based on successful calls and failures. A second, algorithmic enhancement is a generic variable fixing prioritization, which ALNS employs to adjust the subproblem complexity as needed. This is particularly useful for some neighborhoods which do not fix variables by themselves. The proposed primal heuristic has been implemented within the MIP solver SCIP. An extensive computational study is conducted to compare different LNS strategies within our ALNS framework on a large set of publicly available MIP instances from the MIPLIB and Coral benchmark sets. The results of this simulation are used to calibrate the parameters of the bandit selection strategies. A second computational experiment shows the computational benefits of the proposed ALNS framework within the MIP solver SCIP.
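To make the selection mechanism concrete, here is a compact sketch of bandit-guided neighborhood selection using the UCB1 rule; the neighborhood names refer to real LNS heuristics, but the binary reward stub is only a stand-in for the LNS-specific reward function proposed in the paper.

# Sketch of bandit-guided neighborhood selection via UCB1; the reward stub
# is illustrative, not SCIP's actual ALNS reward function.
import math, random

neighborhoods = ["local-branching", "RINS", "crossover", "DINS"]
counts = {n: 0 for n in neighborhoods}
rewards = {n: 0.0 for n in neighborhoods}

def select(t):
    # play every arm once, then balance exploitation and exploration
    for n in neighborhoods:
        if counts[n] == 0:
            return n
    return max(neighborhoods,
               key=lambda n: rewards[n] / counts[n]
               + math.sqrt(2 * math.log(t) / counts[n]))

def run(neighborhood):
    # stand-in for an actual LNS call: reward 1 if an improving solution
    # was found; a real reward would also reflect the effort spent
    return 1.0 if random.random() < 0.3 else 0.0

for t in range(1, 101):
    n = select(t)
    counts[n] += 1
    rewards[n] += run(n)

print({n: round(rewards[n] / counts[n], 2) for n in neighborhoods})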
18-17
Mixed integer programming is a versatile and valuable optimization tool. However, solving specific problem instances can be computationally demanding even for cutting-edge solvers. Such long running times are often significantly reduced by an appropriate change of the solver's parameters. In this paper we investigate "algorithm selection", the task of choosing among a set of algorithms the ones that are likely to perform best for a particular instance.
In our case, we treat different parameter settings of the MIP solver SCIP as different algorithms to choose from. Two peculiarities of the MIP solving process receive our special attention. We address the well-known problem of performance variability by using multiple random seeds. Besides solving time, primal-dual integrals are recorded as a second performance measure in order to distinguish solvers that timed out.
We collected feature and performance data for a large set of publicly available MIP instances. The algorithm selection problem is addressed by several popular, feature-based methods, which have been partly extended for our purpose. Finally, an analysis of the feature space and performance results of the selected algorithms are presented.
18-19
We consider the stochastic extensible bin packing problem (SEBP) in which $n$ items of stochastic size are packed into $m$ bins of unit capacity. In contrast to the classical bin packing problem, bins can be extended at extra cost. This problem plays an important role in stochastic environments such as in surgery scheduling: Patients must be assigned to operating rooms beforehand, such that the regular capacity is fully utilized while the amount of overtime is as small as possible.
This paper focuses on essential ratios between different classes of policies: First, we consider the price of non-splittability, in which we compare the optimal non-anticipatory policy against the optimal fractional assignment policy. We show that this ratio has a tight upper bound of $2$. Moreover, we develop an analysis of a fixed assignment variant of the LEPT rule yielding a tight approximation ratio of $1+1/e \approx 1.368$ under a reasonable assumption on the distributions of job durations.
Furthermore, we prove that the price of fixed assignments, which describes the loss when restricting to fixed assignment policies, is within the same factor. This shows that in some sense, LEPT is the best fixed assignment policy we can hope for.
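The fixed-assignment LEPT rule itself is simple to state: sort jobs by longest expected processing time and assign each greedily to the room with the smallest expected load so far, as in the sketch below (expected durations are illustrative).

# Sketch of a fixed-assignment LEPT rule: Longest Expected Processing Time
# first, greedy assignment to the least-loaded room (in expectation).
def lept_assignment(expected_durations, m):
    rooms = [[] for _ in range(m)]
    loads = [0.0] * m
    for d in sorted(expected_durations, reverse=True):
        i = loads.index(min(loads))   # room with least expected load so far
        rooms[i].append(d)
        loads[i] += d
    return rooms, loads

rooms, loads = lept_assignment([3.0, 2.5, 2.0, 1.5, 1.0, 0.5], m=2)
print(rooms, loads)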
18-20
Let $G$ be a directed acyclic graph with $n$ arcs, a source $s$ and a sink $t$. We introduce the cone $K$ of flow matrices, which is a polyhedral cone generated by the matrices $1_P 1_P^T \in \mathbb{R}^{n\times n}$, where $1_P\in \mathbb{R}^n$ is the incidence vector of the $(s,t)$-path $P$. Several combinatorial problems reduce to a linear optimization problem over $K$. This cone is intractable, but we provide two convergent approximation hierarchies, one of them based on a completely positive representation of $K$. We illustrate this approach by computing bounds for a maximum flow problem with pairwise arc-capacities.
18-35
The amazing success of computational mathematical optimization over the last decades has been driven more by insights into mathematical structures than by the advance of computing technology. In this vein, we address applications where nonconvexity in the model poses principal difficulties.
This paper summarizes the dissertation of Jonas Schweiger on the occasion of the GOR dissertation award 2018. We focus on the work on nonconvex quadratic programs and show how problem-specific structure can be used to obtain tight relaxations and to speed up branch-and-bound methods. Both a classic general QP and the pooling problem, an important practical application, serve as showcases.
18-13
We investigate new convex relaxations for the pooling problem, a classic nonconvex production planning problem in which products are mixed in intermediate pools in order to meet quality targets at their destinations. In this technical report, we characterize the extreme points of the convex hull of our nonconvex set and show that there are infinitely many of them, i.e., the convex hull is not polyhedral. This analysis was used to derive valid nonlinear convex inequalities and to show that, for a specific case, they characterize the convex hull of our set. The new valid inequalities and computational results are presented in ZIB Report 18-12.
18-12
We investigate new convex relaxations for the pooling problem, a classic nonconvex production planning problem in which input materials are mixed in intermediate pools, with the outputs of these pools further mixed to make output products meeting given attribute percentage requirements. Our relaxations are derived from a set which arises from the formulation by considering a single product, a single attribute, and a single pool. The convex hull of the resulting nonconvex set is not polyhedral. We derive valid linear and convex nonlinear inequalities for the convex hull, and demonstrate that different subsets of these inequalities define the convex hull of the nonconvex set in three cases determined by the parameters of the set. Computational results on literature instances and newly created larger test instances demonstrate that the inequalities can significantly strengthen the convex relaxation of the pq-formulation of the pooling problem, which is the relaxation known to have the strongest bound.
18-14
During the last decades, X-ray (micro-)computed tomography has gained increasing attention for the description of porous skeletal and shell structures of various organism groups. However, their quantitative analysis is often hampered by the difficulty to discriminate cavities and pores within the object from the surrounding region. Herein, we test the ambient occlusion (AO) algorithm and newly implemented optimisations for the segmentation of cavities (implemented in the software Amira). The segmentation accuracy is evaluated as a function of (i) changes in the ray length input variable, and (ii) the usage of the AO (scalar) field and other AO-derived (scalar) fields. The results clearly indicate that the AO field itself outperforms all other AO-derived fields in terms of segmentation accuracy and robustness against variations in the ray length input variable. The newly implemented optimisations improved the AO field-based segmentation only slightly, while the segmentations based on the AO-derived fields improved considerably. Additionally, we evaluated the potential of the AO field and AO-derived fields for the separation and classification of cavities as well as skeletal structures by comparing them with commonly used distance-map-based segmentations. For this, we tested the zooid separation within a bryozoan colony, the stereom classification of an ophiuroid tooth, the separation of bioerosion traces within a marble block and the calice (central cavity)-pore separation within a dendrophyllid coral. The obtained results clearly indicate that the ideal input field depends on the three-dimensional morphology of the object of interest. The segmentations based on the AO-derived fields often provided cavity separations and skeleton classifications that were superior to or impossible to obtain with commonly used distance-map-based segmentations. The combined usage of various AO-derived fields by supervised or unsupervised segmentation algorithms might provide a promising target for future research to further improve the results for this kind of high-end data segmentation and classification. Furthermore, the application of the developed segmentation algorithm is not restricted to X-ray (micro-)computed tomographic data but may potentially be useful for the segmentation of 3D volume data from other sources.
18-52
Gene regulatory networks are powerful models for describing the mechanisms and dynamics inside a cell. These networks are generally large in dimension and seldom yield analytical formulations. It was shown that studying the conditional expectations between dimensions (vertices or species) of a network can lead to drastic dimension reduction. These conditional expectations were classically given by solving equations of motion derived from the Chemical Master Equation. In this paper we deviate from this convention and take an algebraic approach instead. That is, we explore the consequences of conditional expectations being described by a polynomial function. There are two main results in this work. First, if the conditional expectation can be described by a polynomial function, then the coefficients of this polynomial function can be reconstructed using the classical moments. Second, there are dimensions in gene regulatory networks which inherently have conditional expectations of algebraic form. We demonstrate through examples that the theory derived in this work can be used to develop new and effective numerical schemes for forward simulation and parameter inference. The algebraic line of investigation of conditional expectations has considerable scope to be applied to many different aspects of gene regulatory networks; this paper serves as a preliminary commentary in this direction.
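The first result can be illustrated directly: if $E[Y|X=x] = \sum_k c_k x^k$, then $E[YX^j] = \sum_k c_k E[X^{j+k}]$, so the coefficients solve a linear system in classical moments. A toy sketch with sampled moments (the synthetic data stands in for moments that would come from the master equation or simulation):

# Recovering polynomial coefficients of a conditional expectation from
# moments; synthetic data with E[Y|X] = 2 + 0.5*X stands in for CME output.
import numpy as np

rng = np.random.default_rng(1)
X = rng.poisson(5.0, size=100_000).astype(float)
Y = 2.0 + 0.5 * X + rng.normal(0, 0.1, size=X.size)

deg = 1
M = np.array([[np.mean(X**(j + k)) for k in range(deg + 1)]
              for j in range(deg + 1)])                 # moment matrix E[X^(j+k)]
b = np.array([np.mean(Y * X**j) for j in range(deg + 1)])
print(np.linalg.solve(M, b))    # approximately [2.0, 0.5]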
18-18
In molecular structure analysis and visualization, the molecule’s atoms are often modeled as hard spheres parametrized by their positions and radii. While the atom positions result from experiments or molecular simulations, the radii are typically taken from the literature. Most often, van der Waals (vdW) radii are used, for which diverse values exist. As a consequence, different visualization and analysis tools use different atomic radii, and the analyses are less objective than often believed. Furthermore, for the geometric accessibility analysis of molecular structures, vdW radii are not well suited. The reason is that during a molecular dynamics simulation, depending on the force field and the kinetic energy in the system, non-bonded atoms can come so close to each other that their vdW spheres intersect. In this paper, we introduce a new kind of atomic radius, called ‘atomic accessibility radius’, that better characterizes the accessibility of an atom in a given molecular trajectory. The new radii reflect the movement possibilities of atoms in the simulated physical system. They are computed by solving a linear program that maximizes the radii of the atoms under the constraint that non-bonded spheres do not intersect in the considered molecular trajectory. Using this data-driven approach, the actual accessibility of atoms can be visualized more precisely.
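A stripped-down sketch of such a radius-maximizing linear program, with made-up coordinates and a single frame standing in for the minimum distance over all trajectory frames:

# Radius-maximizing LP sketch: maximize the sum of radii subject to
# r_i + r_j <= d_ij for every non-bonded pair; coordinates are made up.
import numpy as np
from scipy.optimize import linprog

coords = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 1.8, 0.0]])
nonbonded = [(0, 1), (0, 2), (1, 2)]   # pairs constrained not to overlap

A, ub = [], []
for i, j in nonbonded:
    row = np.zeros(len(coords))
    row[i] = row[j] = 1.0
    A.append(row)
    ub.append(np.linalg.norm(coords[i] - coords[j]))   # min distance over frames

# linprog minimizes, so negate the objective to maximize the total radius
res = linprog(c=-np.ones(len(coords)), A_ub=A, b_ub=ub,
              bounds=[(0, None)] * len(coords))
print(res.x)   # accessibility radii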
18-25
The void space of granular materials is generally divided into larger local volumes denoted as pores and throats connecting pores. The smallest section in a throat is usually denoted as a constriction. A correct description of pores and constrictions may help to understand the processes related to the transport of fluid or fine particles through granular materials, or to build models of imbibition for unsaturated granular media. In the case of numerical granular materials involving packings of spheres, different methods can be used to compute the pore space properties. However, these methods generally induce an over-segmentation of the pore network, and a merging step is usually applied to mitigate such undesirable artifacts, even if a precise delineation of a pore is somewhat subjective. This study provides a comparison between different merging criteria for pores in packings of spheres and a discussion of their implications for both the pore size distribution and the constriction size distribution of the material. A correspondence between these merging techniques is eventually proposed as a guide for the user.
18-27
We study the problem of finding subpaths with high demand in a given network that is traversed by several users. The demand of a subpath is the number of users who completely cover this subpath during their trip.
Especially with large instances, an efficient algorithm for computing all subpaths' demands is necessary. We introduce a path-graph to prevent multiple generations of the same subpath and give a recursive approach to compute the demands of all subpaths.
Our runtime analysis shows that the presented approach compares very well with the theoretical minimum runtime.
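For contrast, the quantity itself can be defined by a naive enumeration that generates every subpath of every trip, which is exactly the redundancy the path-graph recursion avoids:

# Naive baseline defining the quantity: the demand of a subpath is the
# number of user trips containing it contiguously.
from collections import Counter

trips = [("a", "b", "c", "d"), ("b", "c", "d"), ("a", "b", "x")]

demand = Counter()
for trip in trips:
    seen = set()                            # count each subpath once per trip
    for i in range(len(trip)):
        for j in range(i + 1, len(trip)):   # subpaths with at least one edge
            sub = trip[i:j + 1]
            if sub not in seen:
                seen.add(sub)
                demand[sub] += 1

print(demand[("b", "c", "d")])   # -> 2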
18-59
Given a factorable function $f$, we propose a procedure that constructs a concave underestimator of $f$ that is tight at a given point. These underestimators can be used to generate intersection cuts. A peculiarity of these underestimators is that they do not rely on a bounded domain. We propose a strengthening procedure for the intersection cuts that exploits the bounds of the domain. Finally, we propose an extension of monoidal strengthening to take advantage of the integrality of the non-basic variables.
18-39
The open-access transformation of the German scientific publication landscape, which is both called for and funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), requires new forms of cooperation between academia and publishers. Since 2011, the so-called Alliance licenses have been negotiated between libraries and publishers in Germany with DFG support; they anchor far-reaching rights regarding open-access archiving: authors, as well as the institutions representing them, may make articles that appeared in licensed journals freely accessible in suitable repositories of their choice, with no or only a short embargo period. Building on these open-access components, the DFG-funded project "DeepGreen" demonstrates a possible new model of cooperation with publishers: DeepGreen relies on the automated delivery of article data from publishers to repositories and aims to make actually retrievable online, across disciplines, a large share of those journal publications that may go freely accessible online under such licensing contexts.
Having prototypically demonstrated the feasibility of this goal from 2016 to the end of 2017, DeepGreen aims in its second phase (2018-2020) to establish the (as far as possible) automated workflow together with publishers, entitled libraries, and other institutions. The technical building block is a central, intermediary data hub that ensures the automatic and legally compliant delivery of metadata, including full texts, from the publishers directly to the entitled institutional repositories. The goal is a nationwide service that rests on binding agreements with publishers and libraries and (initially) implements the conditions of the Alliance licenses. At the same time, the transferability of the DeepGreen approach to further licensing contexts (FID licenses, consortium licenses, gold open-access agreements) is being examined. A further planned expansion stage is the automated delivery to subject repositories and research information systems.
The national project consortium consists of the two library networks Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV) and Bibliotheksverbund Bayern (BVB), two university libraries (those of Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Technische Universität Berlin (TU Berlin)), the Bayerische Staatsbibliothek (BSB), and one non-university research institution, the Helmholtz Open Science Coordination Office at the Deutsches GeoForschungsZentrum (GFZ).
The follow-up project starts on 1 August 2018. The present document is the project proposal, made available here for reference.
18-31
All feasible flows in potential-driven networks induce an orientation on the undirected graph underlying the network. Clearly, these orientations must satisfy two conditions: they are acyclic, and there are no "dead ends" in the network, i.e., each source requires outgoing flows, each sink requires incoming flows, and each transhipment vertex requires both an incoming and an outgoing flow. In this paper we call orientations that satisfy these conditions acyclic source-transhipment-sink orientations (ASTS-orientations) and study their structure. In particular, we characterize graphs that allow for such an orientation, describe a way to enumerate all possible ASTS-orientations of a given graph, present an algorithm to simplify and decompose a graph before such an enumeration, and shed light on the role of zero flows in the context of ASTS-orientations.
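The two defining conditions are easy to check for a given orientation; the sketch below (with an illustrative graph encoding) verifies the degree conditions and then acyclicity via Kahn's algorithm:

# ASTS-orientation check: no "dead ends" plus acyclicity; encoding is
# illustrative.
def is_asts(nodes, arcs, sources, sinks):
    out_deg = {v: 0 for v in nodes}
    in_deg = {v: 0 for v in nodes}
    for u, v in arcs:
        out_deg[u] += 1
        in_deg[v] += 1
    # degree conditions: sources need outgoing, sinks incoming, others both
    for v in nodes:
        if v in sources and out_deg[v] == 0:
            return False
        if v in sinks and in_deg[v] == 0:
            return False
        if v not in sources and v not in sinks:
            if in_deg[v] == 0 or out_deg[v] == 0:
                return False
    # acyclicity via Kahn's algorithm
    succ = {v: [] for v in nodes}
    for u, v in arcs:
        succ[u].append(v)
    deg = dict(in_deg)
    stack = [v for v in nodes if deg[v] == 0]
    visited = 0
    while stack:
        u = stack.pop()
        visited += 1
        for v in succ[u]:
            deg[v] -= 1
            if deg[v] == 0:
                stack.append(v)
    return visited == len(nodes)

print(is_asts(["s", "m", "t"], [("s", "m"), ("m", "t")], {"s"}, {"t"}))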
18-33
The analysis and visualization of nucleic acids (RNA and DNA) play an increasingly important role due to the growing number of known 3-dimensional structures of such molecules. The great complexity of these structures, in particular those of RNA, demands interactive visualization to get deeper insights into the relationship between the 2D secondary structure motifs and their 3D tertiary structures. Over the last decades, a lot of research in molecular visualization has focused on the visual exploration of protein structures while nucleic acids have only been marginally addressed. In contrast to proteins, which are composed of amino acids, the building blocks of nucleic acids are nucleotides. They form structuring patterns that differ from those of proteins and, hence, also require different visualization and exploration techniques. In order to support interactive exploration of nucleic acids, the computation of secondary structure motifs as well as their visualization in 2D and 3D must be fast. Therefore, in this paper, we focus on the performance of both the computation and visualization of nucleic acid structure. For the first time, we present a ray casting-based visualization of RNA and DNA secondary and tertiary structures, which enables real-time visualization of even large molecular dynamics trajectories. Furthermore, we provide a detailed description of all important aspects to visualize nucleic acid secondary and tertiary structures. With this, we close an important gap in molecular visualization.
18-44
In commodity transport networks such as natural gas, hydrogen and water networks, flows arise from nonlinear potential differences between the nodes, which can be represented by so-called "potential-driven" network models. When operators of these networks face increasing demand or the need to handle more diverse transport situations, they regularly seek to expand the capacity of their network by building new pipelines parallel to existing ones ("looping"). The paper introduces a new mixed-integer nonlinear programming (MINLP) model and a new nonlinear programming (NLP) model and compares these with existing models for the looping problem and related problems in the literature, both theoretically and experimentally. On this basis, we give recommendations about the circumstances under which a certain model should be used. In particular, it turns out that one of our novel models outperforms the existing models. Moreover, the paper is the first to include the practically relevant option that a particular pipeline may be looped several times.
18-47
This document summarizes the state of the mathematical modelling of public transport fare systems by means of a fare graph model developed at ZIB. It enables very simple and concise descriptions of fare structures that can be handled algorithmically: by simultaneously tracking a path in the routing graph and in the fare graph, the fare can be determined already during the route computation. We first describe the concept. Its concrete realization is then illustrated using the fare systems of the transport associations Warnow, MDV, Vogtland, Bremen/Niedersachsen, Berlin/Brandenburg, and Mittelsachsen as examples. We conclude with considerations on the concrete implementation of short-distance fares and on the handling of trips that cross association boundaries.
18-53
We show how biologically coherent mesh models of animals can be created from μCT data to generate artificial yet natural-looking intermediate objects. The whole pipeline of processing algorithms is presented, starting from generating topologically equivalent surface meshes, followed by solving the correspondence problem, and, finally, creating a surface morphing. In this pipeline, we address all the challenges that arise from dealing with complex biological, non-isometric objects. For biological objects it is often particularly important to obtain deformations that look as realistic as possible. In addition, spatially non-uniform shape morphings that only change one part of the surface and keep the rest as stable as possible are of interest for evolutionary studies, since functional modules often change independently from one another. We use Poisson interpolation for this purpose and show that it is well suited to generate both global and local shape deformations.
18-11
In 2005 the European Union liberalized the gas market with a disruptive change and decoupled the trading of natural gas from its transport. The gas is now transported by independent so-called transmission system operators (TSOs). The market model established by the European Union views the gas transmission network as a black box, providing shippers (gas traders and consumers) the opportunity to transport gas from any entry to any exit. TSOs are required to offer the maximum possible capacities at each entry and exit such that any resulting gas flow can be realized by the network. The revenue from selling these capacities amounts to more than one billion euros in Germany alone, but overestimating the capacity might compromise the security of supply. Therefore, evaluating the available transport capacities is extremely important to the TSOs.
This is a report on a large project in mathematical optimization that set out to develop a new toolset for evaluating gas network capacities. The goals and the challenges as they occurred in the project are described, as well as the developments and design decisions taken to meet the requirements.
18-49
We establish a general computational framework for Chvátal's conjecture based on exact rational integer programming. The conjecture states that for any downset (a family of sets closed under taking subsets), a largest intersecting subfamily can always be found among the stars, i.e., the subfamilies whose members all share a common element. As a result, we prove that Chvátal's conjecture holds for all downsets whose union of sets contains at most seven elements. The computational proof relies on an exact branch-and-bound certificate that allows for elementary verification and is independent of the integer programming solver used.
18-04
Quadratic optimization problems (QPs) are ubiquitous, and solution algorithms have matured to a reliable technology. However, the precision of solutions is usually limited due to the underlying floating-point operations. This may cause inconveniences when solutions are used for rigorous reasoning. We contribute on three levels to overcome this issue.
First, we present a novel refinement algorithm to solve QPs to arbitrary precision. It iteratively solves refined QPs, assuming a floating-point QP solver oracle. We prove linear convergence of residuals and primal errors. Second, we provide an efficient implementation, based on SoPlex and qpOASES, that is publicly available in source code. Third, we give precise reference solutions for the Maros and Mészáros benchmark library.
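The refinement loop can be sketched on a toy equality-constrained QP, with a float32 KKT solve standing in for the floating-point QP oracle; this illustrates the scheme only and is not the SoPlex/qpOASES implementation.

# Iterative refinement sketch for min 1/2 x'Qx + c'x s.t. Ax = b:
# a low-precision KKT solve plays the oracle, residuals are computed in
# higher precision, scaled up, re-solved, and the correction is added back.
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])   # KKT system
rhs = np.concatenate([-c, b])

def oracle(r):
    """Low-precision solve standing in for a floating-point QP solver."""
    return np.linalg.solve(K.astype(np.float32),
                           r.astype(np.float32)).astype(np.float64)

z = oracle(rhs)
for _ in range(5):
    residual = rhs - K @ z                      # residual in higher precision
    scale = 1.0 / max(np.max(np.abs(residual)), 1e-300)
    z += oracle(scale * residual) / scale       # scaled corrector solve
print(z[:2], np.max(np.abs(rhs - K @ z)))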
18-34
In oocytes of many organisms, meiotic spindles form in the absence of centrosomes [1–5]. Such female meiotic spindles have a pointed appearance in metaphase, with microtubules focused at acentrosomal spindle poles. At anaphase, the microtubules of acentrosomal spindles then transition to an inter-chromosomal array, while the spindle poles disappear. This transition is currently not understood. Previous studies have focused on this inter-chromosomal microtubule array and proposed a pushing model to drive chromosome segregation [6, 7]. This model includes an end-on orientation of microtubules with chromosomes. Alternatively, chromosomes were thought to associate along bundles of microtubules [8, 9]. Starting with metaphase, this second model proposed a pure lateral chromosome-to-microtubule association up to the final meiotic stages of anaphase. Here we applied large-scale electron tomography [10] of staged C. elegans oocytes in meiosis to analyze the orientation of microtubules with respect to chromosomes. We show that microtubules at metaphase I are primarily oriented laterally to the chromosomes and that microtubules switch to an end-on orientation during progression through anaphase. We further show that this switch in microtubule orientation involves a kinesin-13 microtubule depolymerase, KLP-7, which removes laterally associated microtubules around chromosomes. From this we conclude that both lateral and end-on modes of microtubule-to-chromosome orientation are successively used in C. elegans oocytes to segregate meiotic chromosomes.
18-54
The paper investigates the efficient use of a linearly implicit stiff integrator for the numerical solution of density-driven flow problems. Upon choosing a one-step method of extrapolation type (code LIMEX), the use of full Jacobians and reduced approximations is discussed. Numerical experiments include nonlinear density-driven flow problems such as diffusion from a salt dome (2D), a (modified) Elder problem (3D), the saltpool benchmark (3D) and a real-life salt dome problem (2D). The arising linear equations are solved using either a multigrid preconditioner from the software package UG4 or the sparse matrix solver SuperLU.
18-15
Objective: To present a novel method for automated segmentation of knee menisci from MRIs, and to evaluate quantitative meniscal biomarkers for osteoarthritis (OA) estimated therefrom. Method: A segmentation method employing convolutional neural networks in combination with statistical shape models was developed. Accuracy was evaluated on 88 manual segmentations. Meniscal volume, tibial coverage, and meniscal extrusion were computed and tested for differences between groups of OA, joint space narrowing (JSN), and WOMAC pain. Correlation between computed meniscal extrusion and MOAKS experts' readings was evaluated for 600 subjects. Suitability of biomarkers for predicting incident radiographic OA from baseline to 24 months was tested on a group of 552 patients (184 incident OA, 386 controls) by performing conditional logistic regression. Results: Segmentation accuracy measured as Dice Similarity Coefficient was 83.8% for medial menisci (MM) and 88.9% for lateral menisci (LM) at baseline, and 83.1% and 88.3% at 12-month follow-up. Medial tibial coverage was significantly lower for arthritic cases compared to non-arthritic ones. Medial meniscal extrusion was significantly higher for arthritic knees. A moderate correlation between automatically computed medial meniscal extrusion and experts' readings was found (ρ=0.44). Mean medial meniscal extrusion was significantly greater for incident OA cases compared to controls (1.16±0.93 mm vs. 0.83±0.92 mm; p<0.05). Conclusion: Especially for medial menisci an excellent segmentation accuracy was achieved. Our meniscal biomarkers were validated by comparison to experts' readings as well as analysis of differences w.r.t. groups of OA, JSN, and WOMAC pain. It was confirmed that medial meniscal extrusion is a predictor for incident OA.
18-30
Improving relaxations for potential-driven network flow problems via acyclic flow orientations
(2018)
The class of potential-driven network flow problems provides important models for a range of infrastructure networks. For real-world applications, they need to be combined with integer models for switching certain network elements, giving rise to hard-to-solve MINLPs. We observe that on large-scale real-world meshed networks the usually employed relaxations are rather weak due to cycles in the network. We propose acyclic flow orientations as a combinatorial relaxation of feasible solutions of potential-driven flow problems and show how they can be used to strengthen existing relaxations. First computational results indicate that the strengthened model is much tighter than the original relaxation, thus promising a computational advantage.
17-68
Improving branching for disjunctive polyhedral models using approximate convex decompositions
(2018)
Disjunctive sets arise in a variety of optimization models and much research has been devoted to obtaining strong relaxations for them. This paper focuses on the evaluation of the relaxation during the branch-and-bound search process. We argue that the branching possibilities (i.e., binary variables) of the usual formulations are unsuitable for obtaining strong bounds early in the search process, as they do not capture the overall shape of the entire disjunctive set. To analyze and exploit the shape of the disjunctive set, we propose to compute a hierarchy of approximate convex decompositions and show how to extend the known formulations to obtain improved branching behavior.
17-65
We consider the modeling of operation modes for complex compressor stations (i.e., ones with several in- or outlets) in gas networks. In particular, we propose a refined model that allows precomputing tighter relaxations for each operation mode. These relaxations may be used to strengthen the compressor station submodels in gas network optimization problems. We provide a procedure to obtain the refined model from the input data for the original model. This procedure is based on a nontrivial reduction of the graph representing the gas flow through the compressor station in an operation mode.
18-50
Many material properties are strongly influenced by dislocations, the carriers of plastic deformation. It is therefore paramount to have appropriate tools to quantify dislocation substructures with regard to their features, e.g., dislocation density, Burgers vectors or line direction. While the transmission electron microscope (TEM) has been the most widely used instrument for investigating dislocations, it is usually limited to the two-dimensional (2D) observation of three-dimensional (3D) structures. We reconstruct, visualize and quantify 3D dislocation substructure models from only two TEM images (stereo-pairs) and assess the results. The reconstruction is based on the manual interactive tracing of filiform objects on both images of the stereo-pair. The reconstruction and quantification method are demonstrated on dark field (DF) scanning (S)TEM micrographs of dislocation substructures imaged under diffraction contrast conditions. For this purpose, thick regions (> 300 nm) of TEM foils are analyzed, which are extracted from a Ni-base superalloy single crystal after high temperature creep deformation. It is shown how the method allows 3D quantification from stereo-pairs in a wide range of tilt conditions, achieving line length and orientation uncertainties of 3% and 7°, respectively. Parameters that affect the quality of such reconstructions are discussed.
18-42
For Kendall’s shape space we determine analytically Jacobi fields and parallel transport, and compute geodesic regression. Using the derived expressions, we can fully leverage the geometry via Riemannian optimization and reduce the computational expense by several orders of magnitude. The methodology is demonstrated by performing a longitudinal statistical analysis of epidemiological shape data.
As an application example, we chose 3D shapes of knee bones, reconstructed from image data of the Osteoarthritis Initiative. Comparing subject groups with incident and developing osteoarthritis versus normal controls, we find clear differences in the temporal development of femur shapes. This paves the way for early prediction of incident knee osteoarthritis, using geometry data only.
18-45
18-38
A Simple Way to Compute the Number of Vehicles That Are Required to Operate a Periodic Timetable
(2018)
We consider the following planning problem in public transportation: given a periodic timetable, how many vehicles are required to operate it? In [9], for this sequential approach, it is proposed to first expand the periodic timetable over time, and then answer the above question by solving a flow-based aperiodic optimization problem. In this contribution we propose to keep the compact periodic representation of the timetable and simply solve a particular perfect matching problem. For practical networks, it is very likely that the matching problem decomposes into several connected components. Our key observation is that there is no need to change any turnaround decision for the vehicles of a line during the day, as long as the timetable stays exactly the same.
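In the simplest setting of a single line with fixed turnaround times, the compact periodic view reduces to a back-of-the-envelope count: one full circulation must be covered by vehicles departing every period (the values below are purely illustrative).

# Single-line special case: vehicles = ceil(circulation time / period),
# assuming turnaround decisions stay fixed over the day.
import math

def vehicles_needed(runtime_out, runtime_back, turn_a, turn_b, period):
    tau = runtime_out + turn_a + runtime_back + turn_b   # one full circulation
    return math.ceil(tau / period)

print(vehicles_needed(runtime_out=47, runtime_back=45,
                      turn_a=8, turn_b=10, period=20))   # -> 6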
18-51
Calculation of clinch and elimination numbers for sports leagues with multiple tiebreaking criteria
(2018)
The clinch (elimination) number is the minimal number of future wins (losses) needed to clinch (to be eliminated from) a specified place in a sports league. Several optimization models and computational results are presented in this paper for calculating clinch and elimination numbers in the presence of predefined multiple tiebreaking criteria. The main contribution of this paper is a general algorithmic framework based on integer programming that utilizes possibly multilayered upper and lower bounds.
17-79
A Polyhedral Study of Event-Based Models for the Resource-Constrained Project Scheduling Problem
(2018)
We consider event-based Mixed-Integer Programming (MIP) models for the Resource-Constrained Project Scheduling Problem (RCPSP) that represent an alternative to the common time-indexed model (DDT) of Pritsker et al. (1969) for the case where the underlying time horizon is large or job processing times are subject to huge variations. In contrast to the time-indexed model, the size of event-based models does not depend on the time horizon. For two event-based formulations OOE and SEE of Koné et al. (2011) we present new valid inequalities that dominate the original formulation. Additionally, we introduce a new event-based model: the Interval Event-Based Model (IEE). We deduce linear transformations between all three models that yield the strict domination order IEE > SEE > OOE for their linear programming (LP) relaxations, meaning that IEE has the strongest linear relaxation among the event-based models. We further show that the popular DDT formulation can be retrieved from IEE by certain polyhedral operations, thus giving a unifying view on a complete branch of MIP formulations for the RCPSP. In addition, we analyze the computational performance of all presented models on test instances of the PSPLIB (Kolisch and Sprecher 1997).
18-29
We consider the Cumulative Scheduling Problem (CuSP), in which a set of $n$ jobs must be scheduled according to release dates, due dates and cumulative resource constraints. In constraint programming, the CuSP is modeled as the cumulative constraint. Among the most common propagation algorithms for the CuSP are energetic reasoning (Baptiste et al., 1999) with a complexity of $O(n^3)$ and edge-finding (Vilim, 2009) with $O(kn \log n)$, where $k \leq n$ is the number of different resource demands. We consider the complete versions of the propagators that perform all deductions in one call of the algorithm. In this paper, we introduce the energetic edge-finding rule, a generalization of both energetic reasoning and edge-finding. Our main result is a complete energetic edge-finding algorithm with a complexity of $O(n^2 \log n)$, which improves upon the complexity of energetic reasoning. Moreover, we show that a relaxation of energetic edge-finding with a complexity of $O(n^2)$ subsumes edge-finding while performing stronger propagations from energetic reasoning. A further result shows that energetic edge-finding reaches its fixpoint in strongly polynomial time. Our main insight is that energetic schedules can be interpreted as a single machine scheduling problem, from which we deduce a monotonicity property that is exploited in the algorithms. Hence, our algorithms improve upon the strength and the complexity of energetic reasoning and edge-finding, whose complexity status had seemed untouchable for the last decades.
17-67
Gas networks are an important application area for optimization. When considering long-range transmission, compressor stations play a crucial role in these applications. The purpose of this report is to collect and systematize the models used for compressor stations in the literature. The emphasis is on recent work on simple yet accurate polyhedral models that may replace more simplified traditional models without increasing model complexity. The report also describes an extension of the compressor station data available in GasLib (http://gaslib.zib.de/) with the parameters of these models.
18-58
SCIP-JACK is a customized, branch-and-cut based solver for Steiner tree and related problems. ug [SCIP-JACK, MPI] extends SCIP-JACK to a massively parallel solver by using the Ubiquity Generator (UG) framework. ug [SCIP-JACK, MPI] was the only solver that could run in a distributed environment at the (latest) 11th DIMACS Challenge in 2014. Furthermore, it could solve three well-known open instances and updated the best known solutions for 14 instances from the benchmark library STEINLIB. After the DIMACS Challenge, SCIP-JACK has been considerably improved. However, these improvements were not reflected in ug [SCIP-JACK, MPI]. This paper describes an updated version of ug [SCIP-JACK, MPI], featuring in particular branching on constraints and a customized racing ramp-up. Furthermore, the different stages of the solution process on a supercomputer are described in detail. We also show the latest results on open instances from the STEINLIB.
18-23
In this article we analyze a generalized trapezoidal rule for initial value problems with piecewise smooth right-hand side $F:\mathbb{R}^n \to \mathbb{R}^n$. When applied to such a problem, the classical trapezoidal rule suffers from a loss of accuracy if the solution trajectory intersects a non-differentiability of $F$. In such a situation the investigated generalized trapezoidal rule achieves a higher convergence order than the classical method. While the asymptotic behavior of the generalized method was investigated in a previous work, in the present article we develop the algorithmic structure for efficient implementation strategies and estimate the actual computational cost of the latter. Moreover, energy preservation of the generalized trapezoidal rule is proved for Hamiltonian systems with piecewise linear right-hand side.
18-43
Recent research has shown that piecewise smooth (PS) functions can be approximated by piecewise linear functions with second order error in the distance to a given reference point. A semismooth Newton type algorithm based on successive application of these piecewise linearizations was subsequently developed for the solution of PS equation systems. For local bijectivity of the linearization at a root, a radius of quadratic convergence was explicitly calculated in terms of local Lipschitz constants of the underlying PS function. In the present work we relax the criterion of local bijectivity of the linearization to local openness. For this purpose a weak implicit function theorem is proved via local mapping degree theory. It is shown that there exist PS functions $f:\mathbb{R}^2 \to \mathbb{R}^2$ satisfying the weaker criterion where every neighborhood of the root of $f$ contains a point $x$ such that all elements of the Clarke Jacobian at $x$ are singular. In such neighborhoods the steps of classical semismooth Newton are not defined, which establishes the new method as an independent algorithm. To further clarify the relation between a PS function and its piecewise linearization, several statements about structure correspondences between the two are proved. Moreover, the influence of the specific representation of the local piecewise linear models on the robustness of our method is studied. An example application from cardiovascular mathematics is given.
18-24
We present an extension of Taylor’s theorem towards nonsmooth evaluation procedures incorporating absolute value operations. Evaluation procedures are computer programs of mathematical functions in closed form expression and allow a different treatment of smooth operations and calls to the absolute value function. The well-known classical theorem of Taylor defines polynomial approximation of sufficiently smooth functions and is widely used for the derivation and analysis of numerical integrators for systems of ordinary differential or differential algebraic equations, for the construction of solvers for the continuous nonlinear optimization of finite dimensional objective functions, and for root solving of nonlinear systems of equations. The proof provided herein is constructive and allows for efficiently designed algorithms for the execution and computation of generalized piecewise polynomial expansions. As a demonstration we derive a k-step method on the basis of polynomial interpolation and the proposed generalized expansions.
18-48
Spectral clustering methods are based on solving eigenvalue problems for the identification of clusters, e.g., the identification of metastable subsets of a Markov chain. Usually, real-valued eigenvectors are mandatory for this type of algorithm. The Perron Cluster Analysis (PCCA+) is a well-known spectral clustering method for Markov chains. It is applicable to reversible Markov chains, because reversibility implies a real-valued spectrum. We extend this spectral clustering method to non-reversible Markov chains and give some illustrative examples. The main idea is to replace the eigenvalue problem by a real-valued Schur decomposition. With this extension, non-reversible Markov chains can be analyzed. Furthermore, the chains do not need to have a positive stationary distribution. In addition to metastabilities, dominant cycles and sinks can also be identified. This novel method is called GenPCCA (i.e., generalized PCCA), since it includes the case of non-reversible processes. We also apply the method to real-world eye-tracking data.
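A sketch of the Schur-based idea on a toy non-reversible chain: order the real Schur form so that eigenvalues of large modulus come first, then cluster the rows of the dominant Schur vectors. Plain k-means stands in here for the actual PCCA+-style optimization, and the matrix and modulus threshold are illustrative choices.

# Schur-based clustering sketch for a non-reversible Markov chain.
import numpy as np
from scipy.linalg import schur
from scipy.cluster.vq import kmeans2

# a small non-reversible transition matrix with two metastable blocks
P = np.array([[0.60, 0.38, 0.02, 0.00],
              [0.36, 0.60, 0.02, 0.02],
              [0.00, 0.02, 0.55, 0.43],
              [0.04, 0.00, 0.41, 0.55]])

# real Schur form, ordered so eigenvalues of large modulus come first
T, Z, sdim = schur(P, output="real",
                   sort=lambda x, y: np.hypot(x, y) > 0.7)

X = Z[:, :sdim]                       # dominant (real) Schur vectors
_, labels = kmeans2(X, sdim, minit="++", seed=42)
print(sdim, labels)                   # e.g. 2 clusters: states {0,1} vs {2,3}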
18-36
State-of-the-art solvers for mixed integer programs (MIP) govern a variety of algorithmic components. Ideally, the solver adaptively learns to concentrate its computational budget on those components that perform well on a particular problem, especially if they are time consuming. We focus on three such classes of algorithms, namely large neighborhood search and diving heuristics as well as simplex pricing strategies. For each class we propose a selection strategy that is updated based on the observed runtime behavior, aiming to ultimately select only the best algorithms for a given instance. We review several common strategies for such a selection scenario under uncertainty, also known as the multi-armed bandit problem. In order to apply those bandit strategies, we carefully design reward functions to rank and compare each individual heuristic or pricing algorithm within its respective class. Finally, we discuss the computational benefits of using the proposed adaptive selection within the SCIP Optimization Suite on publicly available MIP instances.