90-XX OPERATIONS RESEARCH, MATHEMATICAL PROGRAMMING
Document Type
- ZIB-Report (186)
- Master's Thesis (17)
- Bachelor's Thesis (9)
- Doctoral Thesis (6)
- Article (3)
- In Proceedings (2)
Institute
- Mathematical Optimization (163)
- Mathematical Optimization Methods (34)
- Network Optimization (33)
- Applied Algorithmic Intelligence Methods (24)
- AI in Society, Science, and Technology (10)
- Mathematics of Telecommunication (7)
- Mathematics of Transportation and Logistics (6)
- Mathematical Algorithmic Intelligence (5)
- Numerical Mathematics (4)
- ZIB Allgemein (4)
Branching decisions play a crucial role in branch-and-bound algorithms for solving combinatorial optimization problems. In this paper, we investigate several branching rules applied to the Quota Steiner Tree Problem with Interference (QSTPI). The Quota Steiner Tree Problem (QSTP) generalizes the classical Steiner Tree Problem (STP) in graphs by seeking a minimum-cost tree that connects a subset of profit-associated vertices to meet a given quota. The extended version, QSTPI, introduces interference among vertices: selecting certain vertices simultaneously reduces their individual contributions to the overall profit. This problem arises, for example, in positioning and connecting wind turbines, where turbines may shadow other turbines, reducing their energy yield. While exact solvers for standard STP-related problems often rely heavily on reduction techniques and cutting-plane methods and rarely generate large branch-and-bound trees, experiments reveal that large instances of the QSTPI require significantly more branching to compute provably optimal solutions. In contrast to branching on variables, we exploit the combinatorial structure of the QSTPI by branching on the graph’s vertices. We adapt classical and problem-specific branching rules and present a comprehensive computational study comparing the effectiveness of these branching strategies.
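A minimal sketch of the vertex-branching step described above, assuming a generic branch-and-bound node that records which vertices are forced into or out of the tree (the names Node, fixed_in, and fixed_out are illustrative and not taken from the paper):

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """A branch-and-bound node: vertices forced into / out of the Steiner tree."""
    fixed_in: frozenset = frozenset()
    fixed_out: frozenset = frozenset()

def branch_on_vertex(node: Node, vertex):
    """Split a node into two children by branching on a vertex:
    one child must contain the vertex, the other must exclude it."""
    include = Node(node.fixed_in | {vertex}, node.fixed_out)
    exclude = Node(node.fixed_in, node.fixed_out | {vertex})
    return include, exclude

# example: branch the root node on vertex 7
root = Node()
with_v, without_v = branch_on_vertex(root, 7)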
The expressiveness of energy system optimization models (ESOMs) depends on a multitude of exogenous parameters. For example, sound estimates of the future energy demand are essential to enable qualified decisions on long-term investments. However, the enormous demand fluctuations even on a fine-grained scale diminish the computational performance of large-scale ESOMs. We therefore propose a clustering-and-decomposition method for linear programming-based ESOMs that first identifies and solves prototypical demand scenarios with the dual simplex algorithm, and then composes the dual optimal prototype bases into a warm-start basis for the full model. We evaluate the feasibility and computational efficiency of our approach on a real-world case study, using a sector-coupled ESOM with hourly resolution for the Berlin-Brandenburg area in Germany, based on the oemof framework.
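A minimal sketch of the clustering step only, grouping hourly demand profiles into a handful of prototypes with a plain k-means loop (numpy only; composing the dual optimal prototype bases into a warm-start basis requires solver support and is not shown, and all names and data are illustrative):

import numpy as np

def kmeans(profiles, k, iters=50, seed=0):
    """Cluster daily (24-hour) demand profiles into k prototype scenarios."""
    rng = np.random.default_rng(seed)
    centers = profiles[rng.choice(len(profiles), k, replace=False)]
    for _ in range(iters):
        # assign each profile to the nearest prototype
        dist = np.linalg.norm(profiles[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each prototype to the mean of its assigned profiles
        centers = np.array([profiles[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

# demand: one row per day, 24 hourly values (synthetic placeholder data)
demand = np.random.default_rng(1).random((365, 24))
prototypes, assignment = kmeans(demand, k=8)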
The timetable is a central pillar of any public transportation system. Constructing and optimizing periodic timetables in terms of passenger comfort and operational efficiency leads to NP-hard optimization problems that are also computationally challenging in applications. The Periodic Event Scheduling Problem (PESP), the standard mathematical tool, benefits from its succinct formulation and rich combinatorial structure, but suffers from poor linear programming relaxations and weak dual bounds. These difficulties persist in a reduced version, where driving and dwelling activities of the lines are assumed to be fixed. In this case, fixing the initial departure time of each line fully determines the timetable, and for each pair of lines, the resulting (weighted) transfer durations can be expressed as a piecewise linear non-convex function of the difference of the initial departure times. When the number of activities between two lines is bounded, this function can be computed in polynomial time. By inserting precomputed piecewise linear functions into a mixed-integer program with the initial departure times as variables, we introduce an equivalent formulation for reduced PESP instances. The model bears analogies with quadratic semi-assignment approaches and offers alternative ways to compute primal and dual bounds. We evaluate the computational behavior of our approach on realistic benchmarking instances.
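A small numerical illustration of the piecewise linear structure, using the standard periodic tension formula: a transfer with lower bound l and period T realizes a duration of ((delta - l) mod T) + l, where delta is the difference of the two event times. Summing weighted transfers between a pair of lines gives the non-convex piecewise linear function of the initial-time difference mentioned above (period, data, and names are illustrative):

PERIOD = 60  # period T in minutes

def transfer_duration(delta, lower):
    """Periodic duration of one transfer given the event-time difference."""
    return (delta - lower) % PERIOD + lower

def weighted_transfer_time(offset_diff, transfers):
    """Total weighted transfer time between two lines as a function of the
    difference of their initial departure times; transfers is a list of
    (shift, lower, weight) tuples, where shift accounts for the fixed
    driving and dwelling times preceding the transfer."""
    return sum(w * transfer_duration(offset_diff + shift, lo)
               for shift, lo, w in transfers)

# evaluating the function over one period exhibits its piecewise linear shape
transfers = [(12, 3, 40.0), (27, 5, 15.0)]
values = [weighted_transfer_time(d, transfers) for d in range(PERIOD)]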
A GPU accelerated variant of Schroeppel-Shamir's algorithm for solving the market split problem
(2025)
The market split problem (MSP), introduced by Cornuéjols and Dawande (1998), is a challenging binary optimization problem that performs poorly on state-of-the-art linear programming-based branch-and-cut solvers. We present a novel algorithm for solving the feasibility version of this problem, derived from Schroeppel–Shamir's algorithm for the one-dimensional subset sum problem. Our approach is based on exhaustively enumerating one-dimensional solutions of MSP and utilizing GPUs to evaluate candidate solutions across the entire problem. The resulting hybrid CPU-GPU implementation efficiently solves instances with up to 10 constraints and 90 variables. We demonstrate the algorithm's performance on benchmark problems, solving instances of size (9, 80) in less than fifteen minutes and (10, 90) in up to one day.
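A simplified two-list meet-in-the-middle sketch of the enumeration idea for the feasibility version (Schroeppel–Shamir's four-list scheme reduces the memory footprint further, and the GPU step that screens candidates against all constraints is reduced here to a plain Python check; instance data and names are illustrative):

from itertools import product

def half_sums(weights):
    """Map each achievable sum of one half of the variables to its 0/1 assignments."""
    table = {}
    for bits in product((0, 1), repeat=len(weights)):
        s = sum(w for w, b in zip(weights, bits) if b)
        table.setdefault(s, []).append(bits)
    return table

def one_row_solutions(weights, rhs):
    """0/1 vectors satisfying a single equation sum w_i x_i = rhs,
    obtained by matching complementary sums of the two halves."""
    mid = len(weights) // 2
    left, right = half_sums(weights[:mid]), half_sums(weights[mid:])
    for s, left_bits in left.items():
        for r_bits in right.get(rhs - s, []):
            for l_bits in left_bits:
                yield l_bits + r_bits

def feasible(x, rows, rhs):
    """Check a candidate against every constraint of the instance."""
    return all(sum(w * b for w, b in zip(row, x)) == r for row, r in zip(rows, rhs))

# tiny example: 2 constraints, 6 binary variables
rows = [(3, 5, 2, 7, 1, 4), (2, 1, 6, 3, 5, 2)]
rhs = [11, 9]
solutions = [x for x in one_row_solutions(rows[0], rhs[0]) if feasible(x, rows, rhs)]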
The Periodic Event Scheduling Problem (PESP) is the central mathematical tool for periodic timetable optimization in public transport. PESP can be formulated in several ways as a mixed-integer linear program with typically general integer variables. We investigate the split closure of these formulations and show that split inequalities are identical with the recently introduced flip inequalities. While split inequalities are a general mixed-integer programming technique, flip inequalities are defined in purely combinatorial terms, namely cycles and arc sets of the digraph underlying the PESP instance. It is known that flip inequalities can be separated in pseudo-polynomial time. We prove that this is best possible unless P = NP, but also observe that the complexity becomes linear-time if the cycle defining the flip inequality is fixed. Moreover, introducing mixed-integer-compatible maps, we compare the split closures of different formulations, and show that reformulation or binarization by subdivision do not lead to stronger split closures. Finally, we estimate computationally how much of the optimality gap of the instances of the benchmark library PESPlib can be closed exclusively by split cuts, and provide better dual bounds for five instances.
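For reference, the general notion of a split cut that the equivalence above refers to, in standard mixed-integer programming notation (not specific to this paper): given a relaxation polyhedron P and an integral vector \pi with integer right-hand side \pi_0, where \pi is supported only on the integer variables, every inequality valid for

\operatorname{conv}\bigl( \{ x \in P : \pi^{\top} x \le \pi_0 \} \cup \{ x \in P : \pi^{\top} x \ge \pi_0 + 1 \} \bigr)

is a split cut, and the split closure is the intersection of these sets over all such pairs (\pi, \pi_0).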
We investigate the use of low-precision first-order methods (FOMs) within a fix-and-propagate (FP) framework for solving mixed-integer programming problems (MIPs). FOMs, using only matrix-vector products instead of matrix factorizations, are well suited for GPU acceleration and have recently gained more attention for their application to large-scale linear programming problems (LPs). We employ PDLP, a variant of the Primal-Dual Hybrid Gradient (PDHG) method specialized to LP problems, to solve the LP-relaxation of our MIPs to low accuracy. This solution is used to motivate fixings within our fix-and-propagate framework. We implemented four different FP variants using primal and dual LP solution information. We evaluate the performance of our heuristics on MIPLIB 2017, showcasing that the low-accuracy LP solution produced by the FOM does not lead to a loss in quality of the FP heuristic solutions when compared to a high-accuracy interior-point method LP solution. Further, we use our FP framework to produce high-accuracy solutions for large-scale (up to 243 million non-zeros and 8 million decision variables) unit-commitment energy-system optimization models created with the modeling framework REMix. For the largest problems, we can generate solutions with under 2% primal-dual gap in less than 4 hours, whereas commercial solvers cannot generate feasible solutions within two days of runtime. This study represents the first successful application of FOMs in large-scale mixed-integer optimization, demonstrating their efficacy and establishing a foundation for future research in this domain.
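A bare-bones numpy sketch of the PDHG iteration for an LP of the form min c^T x subject to Ax >= b, x >= 0, which is the core update that PDLP builds on (PDLP adds restarts, preconditioning, and adaptive step sizes; the step-size rule, termination, and names below are illustrative):

import numpy as np

def pdhg_lp(A, b, c, iters=10000):
    """Primal-dual hybrid gradient for  min c^T x  s.t.  A x >= b, x >= 0.
    Returns an approximate primal-dual pair (x, y)."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2)  # step sizes from the spectral norm of A
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = np.maximum(x - step * (c - A.T @ y), 0.0)            # primal step, projected onto x >= 0
        y = np.maximum(y + step * (b - A @ (2.0 * x_new - x)), 0.0)  # dual step with extrapolated primal
        x = x_new
    return x, y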
Shifting towards renewable energy sources and reducing carbon emissions necessitate sophisticated energy system planning, optimization, and extension. Energy system optimization models (ESOMs) often form the basis for political and operational decision-making. ESOMs are frequently formulated as linear programs (LPs) and mixed-integer linear programs (MIPs). MIPs allow continuous and discrete decision variables. Consequently, they are substantially more expressive than LPs but also more challenging to solve. The ever-growing size and complexity of ESOMs take a toll on the computational time of state-of-the-art commercial solvers. Indeed, for large-scale ESOMs, solving the LP relaxation, the basis of modern MIP solution algorithms, can be very costly. These time requirements can render ESOM MIPs impractical for real-world applications. This article considers a set of large-scale decarbonization-focused unit commitment models with expansion decisions based on the REMix framework (up to 83 million variables and 900,000 discrete decision variables). For these particular instances, the solution of the LP relaxation and the MIP optimum lie close together. Based on this observation, we investigate the application of relaxation-enforced neighborhood search (RENS), machine-learning-guided rounding, and a fix-and-propagate (FP) heuristic as a standalone solution method. Our approach generated feasible solutions 20 to 100 times faster than Gurobi, achieving comparable solution quality with primal-dual gaps ranging from 1% to 35%. This enabled us to solve numerous scenarios without lowering the quality of our models. For some instances that Gurobi could not solve within two days, our FP method provided feasible solutions in under one hour.
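A minimal sketch of how a RENS neighborhood is derived from an LP-relaxation solution: integer variables whose relaxation values are already integral get fixed, and the remaining integer variables are restricted to the floor/ceiling of their fractional values, so the resulting sub-MIP is much smaller (solver-agnostic; variable names and values are illustrative):

import math

def rens_bounds(lp_values, integer_vars, tol=1e-6):
    """Return tightened (lower, upper) bounds per integer variable,
    defining the RENS sub-MIP around the LP-relaxation solution."""
    bounds = {}
    for j in integer_vars:
        v = lp_values[j]
        lo, hi = math.floor(v), math.ceil(v)
        if hi - v < tol or v - lo < tol:  # essentially integral: fix it
            r = round(v)
            bounds[j] = (r, r)
        else:                             # fractional: floor/ceiling interval
            bounds[j] = (lo, hi)
    return bounds

# example: three integer variables with LP-relaxation values
lp_values = {"build_line_a": 1.0, "unit_on_7": 0.42, "storage_blocks": 2.9999999}
print(rens_bounds(lp_values, lp_values.keys()))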
A Multi-Commodity Flow Heuristic for Integrated Periodic Timetabling for Railway Construction Sites
(2025)
Rescheduling a railway system comprises many aspects, such as line planning, timetabling, track allocation, and vehicle scheduling. For periodic timetables, these features can be integrated into a single mixed-integer program extending the Periodic Event Scheduling Problem (PESP) with a routing component. We develop a multi-commodity-flow-based heuristic that computes better solutions faster than a black-box MIP approach on real construction site scenarios on the S-Bahn Berlin network.
How many mutually non-attacking queens can be placed on a d-dimensional chessboard of size n? The n-queens problem in higher dimensions is a generalization of the well-known n-queens problem. We present an integer programming formulation of the n-queens problem in higher dimensions and several strengthenings through additional valid inequalities. Compared to recent benchmarks, we achieve a speedup in computational time of 15–70x across all instances of the integer programs. Our computational results provide optimality certificates for several large instances. Solving additional, previously unsolved instances with the proposed methods is likely possible. On the primal side, we further discuss heuristic approaches to constructing solutions that turn out to be optimal when compared to the IP.
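The natural baseline formulation uses one binary variable per board cell and a packing constraint per attack line; the strengthenings mentioned above add further valid inequalities to such a model (the formulation below is the standard one and is not claimed to be the paper's exact model):

\max \sum_{p \in [n]^d} x_p \quad \text{s.t.} \quad \sum_{p \in L} x_p \le 1 \ \ \text{for every maximal attack line } L, \qquad x_p \in \{0,1\} \ \ \text{for all } p \in [n]^d,

where an attack line is a maximal set of cells lying on a common line with direction vector in \{-1,0,1\}^d \setminus \{0\}.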
In practice, non-specialized interior point algorithms often cannot utilize the massively parallel compute resources offered by modern many- and multi-core compute platforms. However, efficient distributed solution techniques are required, especially for large-scale linear programs. This article describes a new decomposition technique for systems of linear equations implemented in the parallel interior-point solver PIPS-IPM++. The algorithm exploits a matrix structure commonly found in optimization problems: a doubly-bordered block-diagonal or arrowhead structure. This structure is preserved in the linear KKT systems solved during each iteration of the interior-point method. We present a hierarchical Schur complement decomposition that distributes and solves the linear optimization problem; it is designed for high-performance architectures and scales well with the availability of additional computing resources. The decomposition approach uses the locality of the border constraints to decouple the factorization process. Our approach is motivated by large-scale unit-commitment problems. We demonstrate the performance of our method on a set of mid- to large-scale instances, some of which have more than 10^9 nonzeros in their constraint matrix.
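The elementary (non-hierarchical) Schur complement step underlying such decompositions, for an arrowhead system with diagonal blocks K_i, border blocks B_i, and corner block K_0:

K = \begin{pmatrix} K_1 & & & B_1 \\ & \ddots & & \vdots \\ & & K_N & B_N \\ B_1^{\top} & \cdots & B_N^{\top} & K_0 \end{pmatrix},
\qquad
S = K_0 - \sum_{i=1}^{N} B_i^{\top} K_i^{-1} B_i .

Each K_i can be factorized independently, and hence in parallel; only the Schur complement S couples the blocks, and the hierarchical approach described in the article builds on this elimination pattern.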