G. Mathematics of Computing
The topic of this thesis is a volume algorithm for unions of polytopes. The algorithm is based on the work of Bieri and Nef and computes the volume of a union of polytopes with a sweep technique: a hyperplane is moved through space, and the volume on one side of the hyperplane is computed. The further the hyperplane is moved, the larger this half-space becomes. Our algorithm computes the volume of a union of polytopes intersected with the half-space of the sweep-plane as a function of the shift. From a certain point on, the body lies completely inside the half-space of the sweep-plane and the volume remains constant.
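In symbols, the function traced by the sweep can be written as follows; the notation (polytopes P_1, ..., P_k, direction a, offset lambda) is ours and is not fixed by the thesis:

\[
  f(\lambda) \;=\; \operatorname{vol}\Bigl( \bigl(P_1 \cup \dots \cup P_k\bigr)
    \cap \{\, x \in \mathbb{R}^d : a^\top x \le \lambda \,\} \Bigr),
\]

a piecewise polynomial in \(\lambda\) of degree at most \(d\) that becomes constant once the half-space contains the whole union.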
Our algorithm differs from the algorithm of Bieri and Nef in two respects. First, it works only for unions of polytopes, whereas the algorithm of Bieri and Nef works for Nef polyhedra, a generalization of polyhedra that also encompasses the class of unions of polytopes. For us this is not a drawback, since our data sets lead to unions of polytopes. Second, our algorithm is split into two parts. The first part builds a data structure from which, in the second part, the sweep-plane volume function is computed for a given direction. Most of the complexity lies in the first part of the algorithm. This has the advantage that we can compute the volume functions for many different directions, which yields insights into the structure of the body.
The algorithm rests on two different decomposition approaches. First, using arrangements of hyperplanes, we can decompose a union of polytopes into its cells. Here we draw on the work of Gerstner and Holtz, which introduces the concept of position vectors. We use these to determine the vertices and their neighboring cells. This yields a decomposition of our union into cells whose pairwise intersections have no volume. The second decomposition concept is the conical decomposition as introduced by Lawrence. With its help, the indicator function of a polytope can be written as the sum of the indicator functions of its forward cones. The sweep-plane volume functions can then easily be computed using a well-known formula for the volume of simplices.
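As a toy illustration of such a sweep-plane volume function (not the Bieri-Nef algorithm itself, and restricted to axis-aligned boxes, whose pairwise intersections are boxes again), the following Python sketch evaluates f(lambda) exactly via inclusion-exclusion; all names are ours:

    from itertools import combinations

    # A box is (lower, upper), with lower/upper tuples of equal length.
    def clipped_box_volume(box, lam, axis=0):
        """Volume of box intersected with {x : x[axis] <= lam}."""
        lo, hi = box
        vol = 1.0
        for i, (l, u) in enumerate(zip(lo, hi)):
            length = min(u, lam) - l if i == axis else u - l
            if length <= 0:
                return 0.0  # empty after clipping
            vol *= length
        return vol

    def intersect(boxes):
        """Intersection of boxes is again a box (possibly empty)."""
        d = len(boxes[0][0])
        lo = tuple(max(b[0][i] for b in boxes) for i in range(d))
        hi = tuple(min(b[1][i] for b in boxes) for i in range(d))
        return (lo, hi)

    def sweep_volume(boxes, lam):
        """vol((B_1 u ... u B_k) n {x_0 <= lam}) by inclusion-exclusion."""
        total = 0.0
        for r in range(1, len(boxes) + 1):
            for subset in combinations(boxes, r):
                total += (-1) ** (r + 1) * clipped_box_volume(intersect(subset), lam)
        return total

    # Two overlapping unit squares; the union has area 1.75.
    boxes = [((0, 0), (1, 1)), ((0.5, 0.5), (1.5, 1.5))]
    for lam in (0.5, 1.0, 1.5):
        print(lam, sweep_volume(boxes, lam))

For the two overlapping unit squares in the example, the printed values grow from 0.5 through 1.25 to the full union area of 1.75 as the plane sweeps through the body.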
Optimization models often feature disjunctions of polytopes as submodels. Such a disjunctive set is initially (at best) relaxed to its convex hull, which is then refined by branching. To measure the error of the convex relaxation, the (relative) difference between the volume of the convex hull and the volume of the disjunctive set may be used. This requires a method to compute the volume of the disjunctive set. Naively, this can be done via inclusion-exclusion, leveraging existing code for the volume of polytopes. However, this is often inefficient. We propose a revised variant of an old algorithm by Bieri and Nef (1983) for this purpose. The algorithm uses a sweep-plane to incrementally calculate the volume of the disjunctive set as a function of the offset parameter of the sweep-plane.
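Made explicit in our notation (the abstract itself stays informal), the error measure and the naive computation read

\[
  e \;=\; \frac{\operatorname{vol}\bigl(\operatorname{conv}(D)\bigr) - \operatorname{vol}(D)}
               {\operatorname{vol}\bigl(\operatorname{conv}(D)\bigr)},
  \qquad D = P_1 \cup \dots \cup P_k,
\]

\[
  \operatorname{vol}(D) \;=\; \sum_{\emptyset \ne S \subseteq \{1,\dots,k\}}
    (-1)^{|S|+1} \operatorname{vol}\Bigl(\bigcap_{i \in S} P_i\Bigr);
\]

the \(2^k - 1\) terms of the inclusion-exclusion sum are what makes the naive route inefficient for larger disjunctions.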
Modern MIP solvers employ dozens of auxiliary algorithmic components to support the branch-and-bound search in finding and improving primal solutions and in strengthening the dual bound.
Typically, all components are tuned to minimize the average running time to prove optimality. In this article, we take a different look at the run of a MIP solver. We argue that the solution process consists of three different phases, namely achieving feasibility, improving the incumbent solution, and proving optimality. We first show that the entire solving process can be improved by adapting the search strategy with respect to the phase-specific aims using different control tunings. Afterwards, we provide criteria to predict the transition between the individual phases and evaluate the performance impact of altering the algorithmic behavior of the MIP solver SCIP at the predicted phase transition points.
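A schematic rendering of this three-phase view (our own sketch in Python; the thresholds and control settings are illustrative and are not SCIP's actual transition criteria or API):

    from enum import Enum

    class Phase(Enum):
        FEASIBILITY = 1   # no incumbent solution yet
        IMPROVEMENT = 2   # incumbent exists, gap still large
        PROOF = 3         # incumbent believed optimal, close the gap

    def current_phase(incumbent, dual_bound, proof_threshold=0.01):
        """Heuristic phase detection (illustrative threshold, not SCIP's)."""
        if incumbent is None:
            return Phase.FEASIBILITY
        gap = abs(incumbent - dual_bound) / max(abs(incumbent), 1e-9)
        return Phase.PROOF if gap <= proof_threshold else Phase.IMPROVEMENT

    def emphasis_for(phase):
        """Phase-specific control tunings, in the spirit of the article."""
        return {
            Phase.FEASIBILITY: {"heuristics": "aggressive", "cuts": "off"},
            Phase.IMPROVEMENT: {"heuristics": "intensive", "cuts": "default"},
            Phase.PROOF:       {"heuristics": "off", "cuts": "aggressive"},
        }[phase]

    print(emphasis_for(current_phase(incumbent=100.0, dual_bound=99.5)))

The example detects the proof phase (gap 0.5%) and switches emphasis toward strengthening the dual bound, which is the kind of adaptation evaluated in the article.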
Cycle inequalities play an important role in the polyhedral study of the periodic timetabling problem. We give the first pseudo-polynomial time separation algorithm for cycle inequalities, and we give a rigorous proof for the pseudo-polynomial time separability of the change-cycle inequalities. Moreover, we provide several NP-completeness results, indicating that pseudo-polynomial time is best possible. The efficiency of these cutting planes is demonstrated on real-world instances of the periodic timetabling problem.
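For orientation, the cycle inequalities have the following standard form in the PESP literature (recalled from Odijk's work, not quoted from this paper): for a cycle C with forward arcs C^+ and backward arcs C^-, periodic tensions \(x_a \in [\ell_a, u_a]\), and period T,

\[
  T \left\lceil \frac{\sum_{a \in C^+} \ell_a - \sum_{a \in C^-} u_a}{T} \right\rceil
  \;\le\; \sum_{a \in C^+} x_a - \sum_{a \in C^-} x_a
  \;\le\; T \left\lfloor \frac{\sum_{a \in C^+} u_a - \sum_{a \in C^-} \ell_a}{T} \right\rfloor .
\]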
We propose a new coarse-to-fine approach to solve certain linear programs by column generation. The problems that we address contain layers corresponding to different levels of detail, i.e., coarse layers as well as fine layers. These layers are utilized to design efficient pricing rules. In a nutshell, the method shifts the pricing of a fine linear program to a coarse counterpart. In this way, major decisions are taken in the coarse layer, while minor details are tackled within the fine layer. We elucidate our methodology by an application to a complex railway rolling stock rotation problem. We provide comprehensive computational results that demonstrate the benefit of this new technique for the solution of large-scale problems.
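The overall loop can be sketched as follows (a Python mock of the generic scheme; the master, pricing, and refinement here are faked placeholders and not the paper's implementation):

    # Schematic coarse-to-fine column generation (all names illustrative).

    class Master:
        """Toy restricted master: collects columns, fakes dual values."""
        def __init__(self):
            self.columns = []
        def solve(self):
            # A real implementation would solve the restricted master LP
            # and return its dual solution; here we fake shrinking duals.
            return 1.0 / (1 + len(self.columns))
        def add_column(self, col):
            self.columns.append(col)

    def price_coarse(duals, threshold=0.3):
        # Pricing happens in the coarse layer: a cheap scan for a coarse
        # column with negative reduced cost (faked by the threshold).
        return ("coarse-col", duals) if duals > threshold else None

    def refine(coarse_col):
        # The coarse column is lifted to a fine column, filling in the
        # minor details of the fine layer.
        return ("fine-col",) + coarse_col

    master = Master()
    while True:
        duals = master.solve()
        col = price_coarse(duals)   # major decisions in the coarse layer
        if col is None:             # no improving coarse column: stop
            break
        master.add_column(refine(col))
    print(master.columns)

The mock adds three columns before the faked duals drop below the pricing threshold; the point is the control flow, with the fine pricing problem replaced by its coarse counterpart.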
The covering of a graph with (possibly disjoint) connected subgraphs is a fundamental problem in graph theory. In this paper, we study a variant in which a graph's vertices are covered by connected subgraphs subject to lower and upper weight bounds, and propose a column generation approach to dynamically generate feasible and promising subgraphs. Our focus is on the solution of the pricing problem, which turns out to be a variant of the NP-hard Maximum Weight Connected Subgraph Problem. We compare different formulations to handle connectivity and find that a single-commodity flow formulation performs best. This is notable since the respective literature seems to have dismissed this formulation. We improve it to a new coarse-to-fine flow formulation that is theoretically and computationally superior, especially for large instances with many vertices of degree 2, such as highway networks, where it provides a speed-up factor of 10 over the non-flow-based formulations. We also propose a preprocessing method that exploits a median property of weight-constrained subgraphs, a primal heuristic, and a local search heuristic. In an extensive computational study, we evaluate the presented connectivity formulations on different classes of instances and demonstrate the effectiveness of the proposed enhancements. Their speed-ups essentially multiply to an overall factor of 20. Overall, our approach allows the reliable solution of instances with several hundred nodes in a few minutes. These findings are further corroborated in a comparison to existing districting models on a set of test instances from the literature.
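The single-commodity flow idea can be rendered as follows (a textbook-style model in our notation, not the paper's exact coarse-to-fine formulation): fix a root r, select vertices via \(x \in \{0,1\}^V\) with \(x_r = 1\), and let every selected vertex ship one unit of flow to r through selected vertices only:

\[
  \sum_{a \in \delta^{\mathrm{out}}(v)} f_a - \sum_{a \in \delta^{\mathrm{in}}(v)} f_a = x_v
  \qquad \text{for all } v \in V \setminus \{r\},
\]

\[
  f_{(u,v)} + f_{(v,u)} \;\le\; (|V|-1)\,x_u, \qquad
  f_{(u,v)} + f_{(v,u)} \;\le\; (|V|-1)\,x_v
  \qquad \text{for all } \{u,v\} \in E, \quad f \ge 0 .
\]

Every selected vertex must then reach the root along arcs of positive flow, which can only pass through selected vertices, so the selected set is connected.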
Balanced separators are node sets that split the graph into components of bounded size. They find applications in various theoretical and practical problems. In this paper, we discuss how to find minimum balanced separators in node-weighted graphs. Our contribution is a new and exact algorithm that solves Minimum Balanced Separators via a sequence of Hitting Set problems. The only other exact method appears to be a mixed-integer program (MIP) for the edge-weighted case. We adapt this model to node-weighted graphs and compare it to our approach on a set of instances resembling transit networks. The comparison shows that our algorithm is far superior on almost all test instances.
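One natural reading of "a sequence of Hitting Set problems" is an implicit-hitting-set loop, sketched below in Python using networkx (our schematic interpretation; the paper's algorithm may differ in its details): a balanced separator must hit every connected node set whose weight exceeds the bound, so we solve hitting-set instances over the oversized components discovered so far and stop once the tentative separator is balanced.

    from itertools import combinations
    import networkx as nx

    def min_hitting_set(universe, sets_to_hit):
        """Smallest node set intersecting every set (brute force, toy only)."""
        for r in range(len(universe) + 1):
            for cand in combinations(universe, r):
                if all(set(cand) & s for s in sets_to_hit):
                    return set(cand)

    def min_balanced_separator(G, max_weight, weight="weight"):
        """Implicit-hitting-set loop (schematic, unit separator costs)."""
        sets_to_hit = []
        while True:
            sep = min_hitting_set(list(G), sets_to_hit)
            # Check balance: every component of G - sep must be light enough.
            rest = G.subgraph(set(G) - sep)
            heavy = next(
                (set(c) for c in nx.connected_components(rest)
                 if sum(G.nodes[v].get(weight, 1) for v in c) > max_weight),
                None,
            )
            if heavy is None:
                return sep             # sep is a balanced separator
            sets_to_hit.append(heavy)  # sep must hit this oversized set

    # Toy: a path of 5 unit-weight nodes, components of weight <= 2.
    G = nx.path_graph(5)
    print(min_balanced_separator(G, max_weight=2))

On the toy path graph, the loop returns {2}, the middle node, which splits the path into two components of weight 2; since the hitting-set value lower-bounds any separator, the first feasible answer is optimal.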
This thesis presents a game-theoretic investigation of the allocation of inspectors in a transportation network, comparing Nash and Stackelberg equilibrium strategies to a strategy in which inspections are conducted proportionally to the traffic volume. It contains specifications for the integration of space and time dependencies, and extensive experimental tests for the application to the transportation network of German motorways using real data. The main results are, first, that although the formulated spot-checking game is not zero-sum, we are able to compute a Nash equilibrium using linear programming, and second, that the experiments show a Nash equilibrium strategy to be a good trade-off between the efficiency of controls and computation time when compared to the Stackelberg equilibrium strategy.
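For context, the linear-programming route to equilibria builds on the classical maximin LP of finite matrix games (recalled here in our notation; the thesis's construction for the non-zero-sum spot-checking game necessarily goes beyond this form):

\[
  \max_{x,\,v}\; v
  \quad \text{s.t.} \quad
  A^\top x \ge v\,\mathbf{1}, \qquad
  \mathbf{1}^\top x = 1, \qquad x \ge 0,
\]

where A is the payoff matrix and x the inspector's mixed strategy over inspection locations.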