This paper introduces the software package SCIP Optimization Suite and describes its three components: the modeling language Zimpl, the linear programming (LP) solver SoPlex, and SCIP, a software framework for constraint integer programming (CIP). We explain how these three components are used to model and solve challenging mixed integer linear optimization problems (MIP) and mixed integer nonlinear optimization problems (MINLP). SCIP is currently one of the fastest MIP and MINLP solvers. Several examples illustrate the usage of Zimpl, SCIP, and SoPlex, and an overview of the available interfaces is given. Finally, we outline plans for future development.
This paper introduces the SCIP Optimization Suite and discusses the capabilities of its three components: the modeling language Zimpl, the linear programming solver SoPlex, and the constraint integer programming framework SCIP. We explain how these can be used in concert to model and solve challenging mixed integer linear and nonlinear optimization problems. SCIP is currently one of the fastest non-commercial MIP and MINLP solvers. We demonstrate the usage of Zimpl, SCIP, and SoPlex with selected examples, give an overview of available interfaces, and outline plans for future development.
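As a hedged illustration of the interfaces mentioned above (not an example from the paper itself), the following sketch builds and solves a tiny MIP through the Python interface PySCIPOpt; the model, data, and variable names are invented for illustration and assume PySCIPOpt is installed.

```python
# Hedged sketch: a tiny knapsack MIP solved via SCIP's Python interface (PySCIPOpt).
# Assumes `pip install pyscipopt`; all data below is made up for illustration.
from pyscipopt import Model, quicksum

model = Model("tiny_knapsack")

values = [10, 7, 4]    # item values
weights = [6, 5, 3]    # item weights
capacity = 8

x = [model.addVar(vtype="B", name=f"x{i}") for i in range(3)]

# Knapsack capacity constraint and maximization objective.
model.addCons(quicksum(weights[i] * x[i] for i in range(3)) <= capacity)
model.setObjective(quicksum(values[i] * x[i] for i in range(3)), sense="maximize")

model.optimize()
print("best value:", model.getObjVal())
```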
This paper describes how we solved 12 previously unsolved mixed-integer programming (MIP) instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
Introduction
(2015)
This paper describes three presolving techniques for solving mixed integer programming problems (MIPs) that were implemented in the academic MIP solver SCIP. The task of presolving is to reduce the problem size and strengthen the formulation, mainly by eliminating redundant information and exploiting problem structures. The first method fixes continuous singleton columns and extends results known from duality fixing. The second analyzes and exploits pairwise dominance relations between variables, whereas the third detects isolated subproblems and solves them independently. The performance of the presented techniques is demonstrated on two MIP test sets. One contains all benchmark instances from the last three MIPLIB versions, while the other consists of real-world supply chain management problems. The computational results show that the combination of all three presolving techniques almost halves the solving time for the considered supply chain management problems. For the MIPLIB instances we obtain a speedup of 20 % on affected instances while not degrading the performance on the remaining problems.
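As a rough illustration of the third technique above (detecting isolated subproblems), the following hedged sketch finds independent components of the variable/constraint structure via union-find; it is a simplified stand-in written for this listing, not SCIP's actual implementation.

```python
# Hedged sketch: detect independent ("isolated") subproblems as connected
# components of the variable/constraint incidence structure.
from collections import defaultdict

def independent_components(constraints):
    """constraints: list of non-empty lists of variable names.
    Returns one (constraint index set, variable set) pair per component."""
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for cons in constraints:
        for v in cons:
            parent.setdefault(v, v)
        for v in cons[1:]:
            union(cons[0], v)

    comps = defaultdict(lambda: (set(), set()))
    for idx, cons in enumerate(constraints):
        root = find(cons[0])
        comps[root][0].add(idx)
        comps[root][1].update(cons)
    return list(comps.values())

# Toy example: constraint 2 only touches z, so ({2}, {z}) is an independent
# subproblem that could be solved separately during presolving.
print(independent_components([["x", "y"], ["y"], ["z"]]))
```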
In 2011, important milestones were set in Germany for implementing the green road of the open access movement: with financial support from the German Research Foundation (DFG), libraries negotiated so-called Alliance licenses with publishers, in which far-reaching rights regarding open access archiving are anchored. Authors at eligible institutions may make their articles that appeared in these licensed journals freely accessible in suitable repositories of their choice, with no or only a short embargo period.
However, the eligible authors make use of their open access rights only very hesitantly. The libraries, too, as operators of the repositories and thus representatives of the eligible authors, make insufficient use of this right.
With DeepGreen, the applicants pursue the goal of actually making available online a large share of those publications that may go online via the green road under the conditions negotiated specifically in the DFG-funded context. Within the project, a workflow that is automated as far as possible is developed prototypically together with Alliance publishers and eligible libraries, in which legally compliant publisher data including full texts are delivered and ingested by repositories. One technical building block is an intermediate repository that serves as a data hub.
The national project consortium consists of the two library networks Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV) and Bibliotheksverbund Bayern (BVB), the two university libraries of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the Technische Universität Berlin (TU Berlin), as well as the Bayerische Staatsbibliothek (BSB) and a non-university research institution, the Helmholtz Open Science Koordinationsbüro at the Deutsches GeoForschungsZentrum (GFZ).
The project starts on 1 January 2016. The project proposal is provided here for reference.
In October 2012, the German Research Foundation (DFG) issued a call for proposals on the realignment of supra-regional information services in order to initiate a comprehensive reorganization of existing infrastructures, as demanded by the recommendations of the German Council of Science and Humanities (Wissenschaftsrat) on the future of library network systems in Germany. In the thematic field "Library data infrastructure and local systems" of the DFG call, the project "Cloud-based Infrastructure for Library Data" (CIB), proposed by the Hessisches Bibliotheksinformationssystem (HeBIS), the Bibliotheksverbund Bayern (BVB), and the Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV), was approved. The project aims at transferring library workflows and services into cloud-based working environments and at gradually replacing traditional network and local systems with international system platforms. The work packages include, among other things, the integration of authority and external data sources and of further services into these platforms.
We propose an approach to solve the validation of nominations problem using mixed-integer nonlinear programming (MINLP) methods. Our approach handles both the discrete settings and the nonlinear aspects of gas physics. Our main contribution is an innovative coupling of mixed-integer (linear) programming (MILP) methods with nonlinear programming (NLP) that exploits the special structure of a suitable approximation of gas physics, resulting in a global optimization method for this type of problem.
Perspectives
(2015)
One quarter of Europe's energy demand is provided by natural gas distributed through a vast pipeline network covering the whole of Europe. At a cost of 1 million Euro per km, extending the European pipeline network is already a multi-billion Euro business. Therefore, automatic planning tools that support the decision process are desired. Unfortunately, current mathematical methods are not capable of solving the arising network design problems due to their size and complexity. In this article, we will show how to apply optimization methods that can converge to a proven global optimal solution. By introducing a new class of valid inequalities that improve the relaxation of our mixed-integer nonlinear programming model, we are able to speed up the necessary computations substantially.
SCIP-JACK is a customized, branch-and-cut based solver for Steiner tree and related problems. ug [SCIP-JACK, MPI] extends SCIP-JACK to a massively parallel solver by using the Ubiquity Generator (UG) framework. ug [SCIP-JACK, MPI] was the only solver that could run on a distributed environment at the (latest) 11th DIMACS Challenge in 2014. Furthermore, it could solve three well-known open instances and updated 14 best-known solutions to instances from the benchmark library STEINLIB. After the DIMACS Challenge, SCIP-JACK has been considerably improved. However, the improvements were not reflected in ug [SCIP-JACK, MPI]. This paper describes an updated version of ug [SCIP-JACK, MPI], especially branching on constraints and a customized racing ramp-up. Furthermore, the different stages of the solution process on a supercomputer are described in detail. We also show the latest results on open instances from the STEINLIB.
A massively parallel interior-point solver for linear energy system models with block structure
(2019)
Linear energy system models are often a crucial component of system design and operations, as well as energy policy consulting. Such models can lead to large-scale linear programs, which can be intractable even for state-of-the-art commercial solvers; already the available memory on a desktop machine might not be sufficient. Against this backdrop, this article introduces an interior-point solver that exploits common structures of linear energy system models to efficiently run in parallel on distributed memory systems. The solver is designed for linear programs with doubly bordered block-diagonal constraint matrix and makes use of a Schur complement based decomposition. Special effort has been put into handling large numbers of linking constraints and variables as commonly observed in energy system models. In order to handle this strong linkage, a distributed preconditioning of the Schur complement is used. In addition, the solver features a number of more generic techniques such as parallel matrix scaling and structure-preserving presolving. The implementation is based on the existing parallel interior-point solver PIPS-IPM. We evaluate the computational performance on energy system models with up to 700 million non-zero entries in the constraint matrix, and with more than 200 million columns and 250 million rows. This article mainly concentrates on the energy system model ELMOD, which is a linear optimization model representing the European electricity markets by the use of a nodal pricing market clearing. It has been widely applied in the literature on energy system analyses in recent years. However, it will be demonstrated that the new solver is also applicable to other energy system models.
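To make the Schur complement idea mentioned above concrete, the following minimal NumPy sketch solves a small arrowhead (doubly bordered block-diagonal) system by eliminating the diagonal blocks; it is a serial, dense toy written for this listing, not the distributed sparse machinery of PIPS-IPM.

```python
# Hedged sketch: Schur complement solve for an arrowhead linear system
# [A1 0 B1; 0 A2 B2; B1^T B2^T C] [x1; x2; y] = [b1; b2; c].
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

A = [random_spd(4), random_spd(5)]                       # diagonal blocks
B = [rng.standard_normal((4, 2)), rng.standard_normal((5, 2))]  # borders
C = random_spd(2) + 10 * np.eye(2)                       # linking block
b = [rng.standard_normal(4), rng.standard_normal(5)]
c = rng.standard_normal(2)

# S = C - sum_i B_i^T A_i^{-1} B_i, rhs = c - sum_i B_i^T A_i^{-1} b_i.
S, rhs = C.copy(), c.copy()
for Ai, Bi, bi in zip(A, B, b):
    AinvB = np.linalg.solve(Ai, Bi)   # each block solve could run on its own rank
    Ainvb = np.linalg.solve(Ai, bi)
    S -= Bi.T @ AinvB
    rhs -= Bi.T @ Ainvb

y = np.linalg.solve(S, rhs)                                        # linking variables
x = [np.linalg.solve(Ai, bi - Bi @ y) for Ai, Bi, bi in zip(A, B, b)]  # block variables

# Verify against the assembled full system.
full = np.block([[A[0], np.zeros((4, 5)), B[0]],
                 [np.zeros((5, 4)), A[1], B[1]],
                 [B[0].T, B[1].T, C]])
sol = np.concatenate([x[0], x[1], y])
print(np.allclose(full @ sol, np.concatenate([b[0], b[1], c])))
```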
In linear optimization, matrix structure can often be exploited algorithmically. However, beneficial presolving reductions sometimes destroy the special structure of a given problem. In this article, we discuss structure-aware implementations of presolving as part of a parallel interior-point method to solve linear programs with block-diagonal structure, including both linking variables and linking constraints. While presolving reductions are often mathematically simple, their implementation in a high-performance computing environment is a complex endeavor. We report results on impact, performance, and scalability of the resulting presolving routines on real-world energy system models with up to 700 million nonzero entries in the constraint matrix.
KOBV Jahresbericht 2013-2014
(2015)
We report on the selection process leading to the sixth version of the Mixed Integer Programming Library. Selected from an initial pool of over 5,000 instances, the new MIPLIB 2017 collection consists of 1,065 instances. A subset of 240 instances was specially selected for benchmarking solver performance. For the first time, the compilation of these sets was done using a data-driven selection process supported by the solution of a sequence of mixed integer optimization problems, which encoded requirements on diversity and balancedness with respect to instance features and performance data.
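The following toy model hints at what "selection supported by mixed integer optimization" can look like: pick a fixed number of instances while keeping every feature category within bounds. The data, bounds, and objective are invented placeholders; the actual MIPLIB 2017 selection models are far richer.

```python
# Hedged toy selection MIP: choose target_size instances with per-category
# balance, favoring a spread of (made-up) hardness scores.
from pyscipopt import Model, quicksum

instances = {            # name -> (feature category, hardness score), all invented
    "a": ("setcover", 3), "b": ("setcover", 7), "c": ("network", 5),
    "d": ("network", 2), "e": ("scheduling", 6), "f": ("scheduling", 4),
}
target_size = 3
per_category = (0, 2)    # at least 0, at most 2 instances per category

m = Model("toy_selection")
pick = {name: m.addVar(vtype="B", name=name) for name in instances}

m.addCons(quicksum(pick.values()) == target_size)
for cat in {feat for feat, _ in instances.values()}:
    members = [pick[n] for n, (feat, _) in instances.items() if feat == cat]
    m.addCons(quicksum(members) >= per_category[0])
    m.addCons(quicksum(members) <= per_category[1])

# Toy surrogate for "diversity/balancedness": maximize total hardness score.
m.setObjective(quicksum(score * pick[n] for n, (_, score) in instances.items()),
               sense="maximize")
m.optimize()
print([n for n in instances if m.getVal(pick[n]) > 0.5])
```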
The modeling flexibility and the optimality guarantees provided by mixed-integer programming greatly aid the design of robust and future-proof decision support systems. The complexity of industrial-scale supply chain optimization, however, often poses limits to the application of general mixed-integer programming solvers. In this paper we describe algorithmic innovations that help to ensure that MIP solver performance matches the complexity of the large supply chain problems and tight time limits encountered in practice. Our computational evaluation is based on a diverse set of instances modeling real-world scenarios supplied by our industry partner SAP.
The German high-pressure natural gas transport network consists of thousands of interconnected elements spread over more than 120,000 km of pipelines built during the last 100 years. During the last decade, we have spent many person-years to extract consistent data out of the available sources, both public and private. Based on two case studies, we present some of the challenges we encountered.
Preparing consistent, high-quality data is surprisingly hard, and the effort necessary can hardly be overestimated. Thus, it is particularly important to decide which strategy regarding data curation to adopt. Which precision of the data is necessary? When is it more efficient to work with data that is just sufficiently correct on average?
In the case studies we describe our experiences and the strategies we adopted to deal with the obstacles and to minimize future effort.
Finally, we would like to emphasize that well-compiled data sets, publicly available for research purposes, provide the grounds for building innovative algorithmic solutions to the challenges of the future.
In this article we introduce a Minimum Cycle Partition Problem with Length Requirements (CPLR). This generalization of the Travelling Salesman Problem (TSP) originates from routing Unmanned Aerial Vehicles (UAVs). Apart from nonnegative edge weights, CPLR has an individual critical weight value associated with each vertex. A cycle partition, i.e., a vertex disjoint cycle cover, is regarded as a feasible solution if the length of each cycle, which is the sum of the weights of its edges, is not greater than the critical weight of each of its vertices. The goal is to find a feasible partition, which minimizes the number of cycles. In this article, a heuristic algorithm is presented together with a Mixed Integer Programming (MIP) formulation of CPLR. We furthermore introduce a conflict graph, whose cliques yield valid constraints for the MIP model. Finally, we report on computational experiments conducted on TSPLIB-based test instances.
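As a hedged reading of the conflict-graph idea above (the paper's exact construction may differ), two vertices can be declared in conflict whenever every cycle containing both is provably longer than one of their critical weights; a simple certificate is that any such cycle has length at least twice their shortest-path distance.

```python
# Hedged sketch: conflict edges for a cycle partition problem with length
# requirements, via the bound "cycle through u and v >= 2 * dist(u, v)".
import itertools

def conflict_edges(n, weight, critical):
    """weight[i][j]: nonnegative edge weights; critical[i]: critical weight of i."""
    dist = [[weight[i][j] if i != j else 0.0 for j in range(n)] for i in range(n)]
    # Floyd-Warshall all-pairs shortest paths.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    conflicts = []
    for u, v in itertools.combinations(range(n), 2):
        # A cycle through u and v uses two internally disjoint u-v paths,
        # so its length is at least 2 * dist[u][v].
        if 2 * dist[u][v] > min(critical[u], critical[v]):
            conflicts.append((u, v))
    return conflicts

# Toy instance: vertices 0 and 2 are too far apart for vertex 2's critical weight.
w = [[0, 1, 5], [1, 0, 1], [5, 1, 0]]
print(conflict_edges(3, w, critical=[10, 10, 3]))   # -> [(0, 2)]
```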
In this paper, we describe an algorithmic framework for the optimal operation of transient gas transport networks consisting of a hierarchical MILP formulation together with a sequential linear programming inspired post-processing routine. Its implementation is part of the KOMPASS decision support system, which is currently used in an industrial setting.
Real-world gas transport networks are controlled by operating complex pipeline intersection areas, which comprise multiple compressor units, regulators, and valves. In the following, we introduce the concept of network stations to model them. Thereby, we represent the technical capabilities of a station by hand-tailored artificial arcs and add them to the network. Furthermore, we choose from a predefined set of flow directions for each network station and time step, which determines where the gas enters and leaves the station. Additionally, we have to select a supported simple state, which consists of two subsets of artificial arcs: arcs that must be used and arcs that cannot be used. The goal is to determine a stable control of the network satisfying all supplies and demands.
The pipeline intersections, which are represented by the network stations, were initially built centuries ago. Subsequently, due to updates, changes, and extensions, they evolved into highly complex and involved topologies. Extracting their basic properties and modeling them in computer-readable and optimizable descriptions took several years of effort.
To support the dispatchers in controlling the network, we need to compute a continuously updated list of recommended measures. Our motivation for the model presented here is to make fast decisions on important transient global control parameters, i.e., how to route the flow and where to compress the gas. Detailed continuous and discrete technical control measures realizing them, which take all hardware details into account, are determined in a subsequent step.
In this paper, we present computational results from the KOMPASS project using detailed real-world data.
This study examines the usability of a real-world, large-scale natural gas transport infrastructure for hydrogen transport. We investigate whether a converted network can transport the amounts of hydrogen necessary to satisfy current energy demands. After introducing an optimization model for the robust transient control of hydrogen networks, we conduct computational experiments based on real-world demand scenarios. Using a representative network, we demonstrate that replacing each turbo compressor unit by four parallel hydrogen compressors, each of them comprising multiple serial compression stages, and imposing stricter rules regarding the balancing of in- and outflow suffices to realize transport in a majority of scenarios. However, due to the reduced linepack there is an increased need for technical and non-technical measures leading to a more dynamic network control. Furthermore, the amount of energy needed for compression increases by 364% on average.
In this article, we discuss the Length-Constrained Cycle Partition Problem (LCCP). Besides edge weights, the undirected graph in LCCP features an individual critical weight value for each vertex. A cycle partition, i.e., a vertex disjoint cycle cover, is a feasible solution if the length of each cycle is not greater than the critical weight of each of the vertices in the cycle. The goal is to find a feasible partition with the minimum number of cycles. In this article, we discuss theoretical properties, preprocessing techniques, and two mixed-integer programming models (MIP) for LCCP both inspired by formulations for the closely related Travelling Salesperson Problem (TSP). Further, we introduce conflict hypergraphs, whose cliques yield valid constraints for the MIP models.
We conclude with a report on computational experiments conducted on (A)TSPLIB-based instances. As an example, we use a routing problem in which a fleet of uncrewed aerial vehicles (UAVs) patrols a set of areas.
The demanded, and DFG-funded, open access transformation of the German scientific publication landscape requires new forms of cooperation between science and publishers. Since 2011, so-called Alliance licenses have been negotiated in Germany between libraries and publishers with support from the German Research Foundation (DFG); these anchor far-reaching rights regarding open access archiving: authors, but also the institutions representing them, may make articles that appeared in licensed journals freely accessible in suitable repositories of their choice, with no or only a short embargo period. Building on these open access components, the DFG-funded project "DeepGreen" demonstrates a possible new model of cooperation with publishers: DeepGreen relies on the automated distribution of article data from publishers to repositories and aims, across disciplines, to actually make available online a large share of those scientific journal publications that may go online freely under these licensing conditions.
After DeepGreen prototypically tested the feasibility of this goal from 2016 until the end of 2017, the second project phase (2018-2020) aims to establish the (largely) automated workflow together with publishers, eligible libraries, and other institutions. The technical building block is a central, intermediary data hub that ensures the automated and legally compliant delivery of metadata including full texts from publishers directly to eligible institutional repositories. The goal is a nationwide service that rests on binding agreements with publishers and libraries and (initially) implements the conditions of the Alliance licenses. At the same time, the transferability of the DeepGreen approach to further licensing contexts (FID licenses, consortium licenses, gold open access agreements) is being examined. A further expansion stage, which is also being planned, is the automated delivery to subject repositories and research information systems.
The national project consortium consists of the two library networks Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV) and Bibliotheksverbund Bayern (BVB), two university libraries, those of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the Technische Universität Berlin (TU Berlin), as well as the Bayerische Staatsbibliothek (BSB) and a non-university research institution, the Helmholtz Open Science Koordinationsbüro at the Deutsches GeoForschungsZentrum (GFZ).
The follow-up project starts on 1 August 2018. The project proposal is provided here for reference.
In 2005 the European Union liberalized the gas market with a disruptive change and decoupled trading of natural gas from its transport. The gas is now transported by independent so-called transmission system operators, or TSOs. The market model established by the European Union views the gas transmission network as a black box, providing shippers (gas traders and consumers) the opportunity to transport gas from any entry to any exit. TSOs are required to offer the maximum possible capacities at each entry and exit such that any resulting gas flow can be realized by the network. The revenue from selling these capacities amounts to more than one billion Euro in Germany alone, but overestimating the capacity might compromise the security of supply. Therefore, evaluating the available transport capacities is extremely important to the TSOs.
This is a report on a large project in mathematical optimization, set out to develop a new toolset for evaluating gas network capacities. The goals and the challenges as they occurred in the project are described, as well as the developments and design decisions taken to meet the requirements.
Current linear energy system models (ESMs) aiming to provide sufficient detail and reliability frequently lead to problems of both high intricacy and increasing scale. Unfortunately, the size and complexity of these problems often prove to be intractable even for commercial state-of-the-art linear programming solvers. This article describes an interdisciplinary approach to exploit the intrinsic structure of these large-scale linear problems to be able to solve them on massively parallel high-performance computers. A key aspect is a set of extensions to the parallel interior-point solver PIPS-IPM, originally developed for stochastic optimization problems. Furthermore, a newly developed GAMS interface to the solver as well as some GAMS language extensions to model block-structured problems will be described.
Borne out of a surprising variety of practical applications, the maximum-weight connected subgraph problem has attracted considerable interest during the past years. This interest has not only led to notable research on theoretical properties, but has also brought about several (exact) solvers, with steadily increasing performance. Continuing along this path, the following article introduces several new algorithms such as reduction techniques and heuristics and describes their integration into an exact solver. The new methods are evaluated with respect to both their theoretical and practical properties. Notably, the new exact framework solves common problem instances from the literature faster than all previous approaches. Moreover, one large-scale benchmark instance from the 11th DIMACS Challenge can be solved for the first time to optimality and the primal-dual gap for two others can be significantly reduced.
In 2005 the European Union liberalized the gas market with a disruptive change and decoupled trading of natural gas from its transport. The gas is now transported by independent so-called transmission system operators, or TSOs. The market model established by the European Union views the gas transmission network as a black box, providing shippers (gas traders and consumers) the opportunity to transport gas from any entry to any exit. TSOs are required to offer the maximum possible capacities at each entry and exit such that any resulting gas flow can be realized by the network. The revenue from selling these capacities amounts to more than one billion Euro in Germany alone, but overestimating the capacity might compromise the security of supply. Therefore, evaluating the available transport capacities is extremely important to the TSOs.
This is a report on a large project in mathematical optimization, set out to develop a new toolset for evaluating gas network capacities. The goals and the challenges as they occurred in the project are described, as well as the developments and design decisions taken to meet the requirements.
The Steiner tree problem in graphs is a classical problem that commonly arises in practical applications as one of many variants. Although the different Steiner tree problem variants are usually strongly related, solution approaches employed so far have been prevalently problem-specific. Against this backdrop, the solver SCIP-Jack was created as a general-purpose framework that can be used to solve the classical Steiner tree problem and 11 of its variants. This versatility is achieved by transforming various problem variants into a general form and solving them by using a state-of-the-art MIP-framework. Furthermore, SCIP-Jack includes various newly developed algorithmic components such as preprocessing routines and heuristics. The result is a high-performance solver that can be employed in massively parallel environments and is capable of solving previously unsolved instances. After the introduction of SCIP-Jack at the 2014 DIMACS Challenge on Steiner problems, the overall performance of the solver has considerably improved. This article provides an overview of the current state.
MILP. Try. Repeat.
(2021)
Cut selection is a subroutine used in all modern mixed-integer linear programming solvers with the goal of selecting a subset of generated cuts that induce optimal solver performance. These solvers have millions of parameter combinations, and so are excellent candidates for parameter tuning. Cut selection scoring rules are usually weighted sums of different measurements, where the weights are parameters. We present a parametric family of mixed-integer linear programs together with infinitely many family-wide valid cuts. Some of these cuts can induce integer optimal solutions directly after being applied, while others fail to do so even if infinitely many are applied. We show, for a specific cut selection rule, that any finite grid search of the parameter space will always miss all parameter values that select cuts inducing integer optimal solutions in infinitely many of our problems. We propose a variation on the design of existing graph convolutional neural networks, adapting them to learn cut selection rule parameters. We present a reinforcement learning framework for selecting cuts, and train our design using said framework over MIPLIB 2017. Our framework and design show that adaptive cut selection does substantially improve performance over a diverse set of instances, but that finding a single function describing such a rule is difficult. Code for reproducing all experiments is available at https://github.com/Opt-Mucca/Adaptive-Cutsel-MILP.
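To make the "weighted-sum scoring rule" concrete, here is a hedged, generic sketch of how such a rule ranks candidate cuts; the measurement names and weight values are placeholders, not SCIP's exact rule or the parameters learned in the paper.

```python
# Hedged sketch: rank candidate cuts by a parameterized weighted-sum score and
# keep the best few. All names and numbers are illustrative placeholders.
def score_cut(measures, weights):
    """measures/weights: dicts keyed by measurement name (e.g. efficacy,
    objective parallelism, integer support)."""
    return sum(weights[k] * measures.get(k, 0.0) for k in weights)

def select_cuts(cuts, weights, max_cuts):
    ranked = sorted(cuts, key=lambda c: score_cut(c["measures"], weights), reverse=True)
    return ranked[:max_cuts]

cuts = [
    {"name": "cover_1",  "measures": {"efficacy": 0.9, "parallelism": 0.2, "int_support": 1.0}},
    {"name": "gomory_3", "measures": {"efficacy": 0.5, "parallelism": 0.8, "int_support": 0.4}},
    {"name": "mir_2",    "measures": {"efficacy": 0.7, "parallelism": 0.1, "int_support": 0.6}},
]
weights = {"efficacy": 1.0, "parallelism": -0.1, "int_support": 0.1}  # tunable parameters
print([c["name"] for c in select_cuts(cuts, weights, max_cuts=2)])
```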
About 23% of the German energy demand is supplied by natural gas. Additionally, Germany serves as a transit country for about the same amount. Thereby, the German network represents a central hub in the European natural gas transport network. The transport infrastructure is operated by transmission system operators (TSOs). The number one priority of the TSOs is to ensure the security of supply. However, the TSOs have only very limited knowledge about the intentions and planned actions of the shippers (traders). Open Grid Europe (OGE), one of Germany's largest TSOs, operates a high-pressure transport network of about 12,000 km length. With the introduction of peak-load gas power stations, it is of great importance to predict the in- and out-flow of the network to ensure the necessary flexibility and security of supply for the German Energy Transition ("Energiewende"). In this paper, we introduce a novel hybrid forecast method applied to gas flows at the boundary nodes of a transport network. This method employs an optimized feature selection and minimization. We use a combination of FAR, LSTM, and mathematical programming to achieve robust, high-quality forecasts on real-world data for different types of network nodes.
DeepGreen was funded by the German Research Foundation (DFG) in a second project phase from 1 August 2018 to 30 June 2021. DeepGreen supports libraries, as service providers for universities, non-university research institutions, and the researchers working there, in making publications freely accessible on open access repositories, and fosters the interplay between scientific institutions and publishers. The second project phase involved the Kooperativer Bibliotheksverbund Berlin-Brandenburg, the Bayerische Staatsbibliothek, the Bibliotheksverbund Bayern, the university libraries of the Friedrich-Alexander-Universität Erlangen-Nürnberg and the Technische Universität Berlin, and the Helmholtz Open Science Office. The project successfully developed a technical and organizational solution for the automated distribution of article data from scientific publishers to institutional and subject repositories. The second project phase focused on testing the data hub in practice and on extending it to further data recipients and further publishers. Following the DFG-funded project period, DeepGreen moved into a two-year pilot operation. The aim of the pilot operation is to prepare the transition to a nationwide production service.
This study investigates the progress made in LP and MILP solver performance during the last two decades by comparing the solver software from the beginning of the millennium with the codes available today.
On average, we found that for solving LP/MILP, computer hardware got about 20 times faster, and the algorithms improved by a factor of about nine for LP and around 50 for MILP, which gives a total speed-up of about 180 and 1,000 times, respectively.
However, these numbers have a very high variance and they considerably underestimate the progress made on the algorithmic side: many problem instances can nowadays be solved within seconds, which the old codes are not able to solve within any reasonable time.
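Reading the reported factors together, the total speed-ups quoted above follow from multiplying the hardware and algorithmic factors:

```latex
% Combined speed-up as the product of the hardware and algorithmic factors
% reported in the abstract above.
\[
  \underbrace{20}_{\text{hardware}} \times \underbrace{9}_{\text{LP algorithms}} \approx 180,
  \qquad
  \underbrace{20}_{\text{hardware}} \times \underbrace{50}_{\text{MILP algorithms}} \approx 1000.
\]
```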
In this article we investigate methods to solve a fundamental task in gas transportation, namely the validation of nominations problem: Given a gas transmission network consisting of passive pipelines and active, controllable elements and given an amount of gas at every entry and exit point of the network, find operational settings for all active elements such that there exists a network state meeting all physical, technical, and legal constraints.
We describe a two-stage approach to solve the resulting complex and numerically difficult mixed-integer non-convex nonlinear feasibility problem. The first phase consists of four distinct algorithms facilitating mixed-integer linear, mixed-integer nonlinear, reduced nonlinear, and complementarity constrained methods to compute possible settings for the discrete decisions. The second phase employs a precise continuous nonlinear programming model of the gas network. Using this setup, we are able to compute high quality solutions to real-world industrial instances whose size is significantly larger than networks that have appeared in the literature previously.
Since 2005, the gas market in the European Union has been liberalized and the trading of natural gas has been decoupled from its transport. The transport is done by so-called transmission system operators, or TSOs. The market model established by the European Union views the gas transmission network as a black box, providing shippers (gas traders and consumers) the opportunity to transport gas from any entry to any exit. TSOs are required to offer maximum independent capacities at each entry and exit such that the resulting gas flows can be realized by the network without compromising the security of supply. Therefore, evaluating the available transport capacities is extremely important to the TSOs.
This paper gives an overview of the toolset for evaluating gas network capacities that has been developed within the ForNe project, a joint research project of seven research partners initiated by Open Grid Europe, Germany's biggest TSO. While most of the relevant mathematics is described in the book "Evaluating Gas Network Capacities", this article sketches the system as a whole, describes some developments that have taken place recently, and gives some details about the current implementation.
Contemporary supercomputers can easily provide years of CPU time per wall-clock hour. One challenge of today's software development is how to harness this vast computing power in order to solve really hard mixed integer programming instances. In 2010, two out of six open MIPLIB2003 instances could be solved by ParaSCIP in more than ten consecutive runs, restarting from checkpointing files. The contribution of this paper is threefold: first, we present for the first time computational results of single runs for those two instances; secondly, we provide new improved upper and lower bounds for all of the remaining four open MIPLIB2003 instances; finally, we explain which new developments led to these results and discuss the current progress of ParaSCIP. Experiments were conducted on HLRN II, on HLRN III, and on the Titan supercomputer, using up to 35,200 cores.
Mixed-integer programming (MIP) is arguably among the hardest classes of optimization problems. This paper describes how we solved 21 previously unsolved MIP instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper, we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
We present an exact rational solver for mixed-integer linear programming that avoids the numerical inaccuracies inherent in the floating-point computations used by existing software. This allows the solver to be used for establishing theoretical results and in applications where correct solutions are critical due to legal and financial consequences. Our solver is a hybrid symbolic/numeric implementation of LP-based branch-and-bound, using numerically-safe methods for all binding computations in the search tree. Computing provably accurate solutions by dynamically choosing the fastest of several safe dual bounding methods depending on the structure of the instance, our exact solver is only moderately slower than an inexact floating-point branch-and-bound solver. The software is incorporated into the SCIP optimization framework, using the exact LP solver QSopt_ex and the GMP arithmetic library. Computational results are presented for a suite of test instances taken from the MIPLIB and Mittelmann collections.
Medium- and long-term planning for gas transport has become considerably more complicated due to changes in the regulatory framework. The key point is the separation of gas trading and transport. This article discusses the resulting mathematical planning problems, which are referred to as validation of nominations and bookings, determination of technical capacity, and topology planning. These mathematical optimization problems are presented and solution approaches are sketched.
In this article we describe the impact of embedding a 15-year-old model for solving the Steiner tree problem in graphs in a state-of-the-art MIP-framework, making the result run in a massively parallel environment, and extending the model to solve as many variants as possible. We end up with a high-performance solver that is capable of solving previously unsolved instances and, in contrast to its predecessor, is freely available for academic research.
Presolving attempts to eliminate redundant information from the problem formulation and simultaneously tries to strengthen the formulation. It can be very effective and is often essential for solving instances. Especially for mixed integer programming problems, fast and effective presolving algorithms are very important. In this paper, we report on three new presolving techniques. The first method searches for singleton continuous columns and tries to fix the corresponding variables. Then we present a presolving technique which exploits a partial order of the variables to induce fixings. Finally, we show an approach based on connected components in graphs. Our computational results confirm the profitable use of the algorithms in practice.
Modern MIP solvers employ dozens of auxiliary algorithmic components to support the branch-and-bound search in finding and improving primal solutions and in strengthening the dual bound.
Typically, all components are tuned to minimize the average running time to prove optimality. In this article, we take a different look at the run of a MIP solver. We argue that the solution process consists of three different phases, namely achieving feasibility, improving the incumbent solution, and proving optimality. We first show that the entire solving process can be improved by adapting the search strategy with respect to the phase-specific aims using different control tunings. Afterwards, we provide criteria to predict the transition between the individual phases and evaluate the performance impact of altering the algorithmic behavior of the MIP solver SCIP at the predicted phase transition points.
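As a loose illustration of the phase-based view above, the following minimal sketch switches a hypothetical emphasis setting at simplified transition points (first incumbent found, gap below a threshold); these triggers and settings are stand-ins invented here, not the prediction criteria or SCIP tunings studied in the paper.

```python
# Minimal sketch of phase-aware solver control: feasibility -> improvement -> proof.
def current_phase(has_incumbent, gap, gap_threshold=0.05):
    if not has_incumbent:
        return "feasibility"      # no solution yet: focus on finding one
    if gap > gap_threshold:
        return "improvement"      # solution exists but gap is large: improve it
    return "proof"                # small gap: concentrate on proving optimality

EMPHASIS = {  # hypothetical per-phase control settings
    "feasibility": {"heuristics": "aggressive", "cuts": "default"},
    "improvement": {"heuristics": "default", "cuts": "aggressive"},
    "proof":       {"heuristics": "off", "cuts": "aggressive"},
}

# Toy trace of (has_incumbent, relative gap) snapshots during a solve.
for snapshot in [(False, 1.0), (True, 0.30), (True, 0.02)]:
    phase = current_phase(*snapshot)
    print(phase, "->", EMPHASIS[phase])
```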
This paper describes how we solved 12 previously unsolved mixed-integer programming (MIP) instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
SCIP-JACK is a customized, branch-and-cut based solver for Steiner tree and related problems. ug [SCIP-JACK, MPI] extends SCIP-JACK to a massively parallel solver by using the Ubiquity Generator (UG) framework. ug [SCIP-JACK, MPI] was the only solver that could run on a distributed environment at the (latest) 11th DIMACS Challenge in 2014. Furthermore, it could solve three well-known open instances and updated 14 best-known solutions to instances from the benchmark library STEINLIB. After the DIMACS Challenge, SCIP-JACK has been considerably improved. However, the improvements were not reflected in ug [SCIP-JACK, MPI]. This paper describes an updated version of ug [SCIP-JACK, MPI], especially branching on constraints and a customized racing ramp-up. Furthermore, the different stages of the solution process on a supercomputer are described in detail. We also show the latest results on open instances from the STEINLIB.
Current linear energy system models (ESMs) aiming to provide sufficient detail and reliability frequently lead to problems of both high intricacy and increasing scale. Unfortunately, the size and complexity of these problems often prove to be intractable even for commercial state-of-the-art linear programming solvers. This article describes an interdisciplinary approach to exploit the intrinsic structure of these large-scale linear problems to be able to solve them on massively parallel high-performance computers. A key aspect is a set of extensions to the parallel interior-point solver PIPS-IPM, originally developed for stochastic optimization problems. Furthermore, a newly developed GAMS interface to the solver as well as some GAMS language extensions to model block-structured problems will be described.
In the transition towards a pure hydrogen infrastructure, repurposing the existing natural gas infrastructure is considered. In this study, the maximal technically feasible injection of hydrogen into the existing German natural gas transmission network is analysed with respect to regulatory limits regarding the gas quality. We propose a transient tracking model based on the general pooling problem including linepack. The analysis is conducted using real-world hourly gas flow data on a network of about 10,000 km length.
This article discusses the Length-Constrained Cycle Partition Problem (LCCP), which constitutes a new generalization of the Travelling Salesperson Problem (TSP). Apart from nonnegative edge weights, the undirected graph in LCCP features a nonnegative critical length parameter for each vertex. A cycle partition, i.e., a vertex-disjoint cycle cover, is a feasible solution for LCCP if the length of each cycle is not greater than the critical length of each vertex contained in it. The goal is to find a feasible partition having a minimum number of cycles. Besides analyzing theoretical properties and developing preprocessing techniques, we propose an elaborate heuristic algorithm that produces solutions of good quality even for large-size instances. Moreover, we present two exact mixed-integer programming formulations (MIPs) for LCCP, which are inspired by well-known modeling approaches for TSP. Further, we introduce the concept of conflict hypergraphs, whose cliques yield valid constraints for the MIP models. We conclude with a discussion on computational experiments that we conducted using (A)TSPLIB-based problem instances. As a motivating example application, we describe a routing problem where a fleet of uncrewed aerial vehicles (UAVs) must patrol a given set of areas.
The Steiner tree problem in graphs is a classical problem that commonly arises in practical applications as one of many variants. While often a strong relationship between different Steiner tree problem variants can be observed, solution approaches employed so far have been prevalently problem specific. In contrast, this paper introduces a general purpose solver that can be used to solve both the classical Steiner tree problem and many of its variants without modification. This is achieved by transforming various problem variants into a general form and solving them using a state-of-the-art MIP-framework. The result is a high-performance solver that can be employed in massively parallel environments and is capable of solving previously unsolved instances.
In 2011, important priorities were set to realize green open access publishing in Germany. With financial support from the German Research Foundation (DFG), libraries negotiated Alliance licenses with publishers that guarantee extensive open access rights. Authors at institutions with access to the licensed journals can freely publish their articles, immediately or after a short embargo period, in a repository of their choice. However, authors make only hesitant use of these open access rights. Libraries too, as managers of institutional and subject-based repositories and thus legitimate representatives of the authors, only rarely make use of these rights. The aim of DeepGreen is to make the majority of those publications available online. Together with publishers of the Alliance licenses, the project consortium wants to develop a nearly fully automated workflow that covers the delivery of data, including the full texts, from the publishers, as well as the data transformation into the necessary import formats and the loading process into the repositories. An intermediate "publication router" will serve as a distribution platform. The DeepGreen metadata schema contains metadata properties describing a wide range of bibliographic metadata deliverable by the Alliance license publishers (the most common standards are JATS and CrossRef XML) as well as its compliance with the technical, quality, and metadata standards of the repositories. The schema includes required metadata elements and optional properties providing additional information. The metadata schema is aligned to the OCLC repository best practices ("Best Practices for CONTENTdm and other OAI-PMH compliant repositories: creating sharable metadata", URL: http://www.oclc.org/content/dam/support/wcdigitalcollectiongateway/MetadataBestPractices.pdf). The current version of the schema is subject to change as the functional requirements and workflow practices evolve with project experience and prototype production.
KOBV Jahresbericht 2015-2016
(2017)
The DFG-funded project DeepGreen aims to develop an automated, legally compliant solution for retrieving article data from scientific publishers and, after any licensing embargo periods have expired, distributing it to eligible repositories, thereby transferring it to open access. The focus lies on the DFG-funded, nationally negotiated Alliance licenses with their special open access component.
This guide is aimed specifically at the operators of institutional repositories and provides recommendations and a workflow to enable the successful publication of the articles assigned to them via the DeepGreen data hub. The guide is based on the experiences of the DeepGreen project partners to date.
Questionnaire for effective exchange of bibliographic metadata – current status of publishing houses
(2016)
The project DeepGreen aims to realise better usage of green open access publication rights with regard to Alliance Licenses in Germany (https://www.nationallizenzen.de/open-access). Together with publishers who offer Alliance Licenses and authorized libraries, the project group intends to develop a prototype of a nearly fully automated workflow that covers the delivery of data from the publishers, including the article full texts, as well as the process of loading data into the institutional repositories of licensees. Further information about the project can be found within ZIB-Report 15-58, urn:nbn:de:0297-zib-56799.
In order to become acquainted with publishers’ processes for exchanging documents and metadata, the project group developed a questionnaire for an online survey. The publishing of this questionnaire is intended to demonstrate relevant aspects of the issue (e.g. methods of data exchange, protocols and interfaces) and to foster reuse of valuable questionnaire elements. The XML file can be reused as a template; the PDF file reproduces the original survey layout.
While energy-intensive industries like the steel industry plan to switch to renewable energy sources, other industries, such as the cement industry, have to rely on carbon capture storage and utilization technologies to reduce the inevitable carbon dioxide (CO2) emissions of their production processes. In this context, we investigate the problem of finding optimal pipeline diameters from a discrete set of diameters for a tree-shaped network transporting captured CO2 from multiple sources to a single sink.
The general problem of optimizing arc capacities in potential-based fluid networks is a challenging mixed-integer nonlinear program. Additionally, the behaviour of CO2 is highly sensitive and nonlinear regarding temperature and pressure changes. We propose an iterative algorithm splitting the problem into two parts: a) the pipe-sizing problem under a fixed supply scenario and temperature distribution and b) the thermophysical modelling including mixing effects, the Joule-Thomson effect, and heat exchange with the surrounding environment. We show the effectiveness of our approach by applying our algorithm to a real-world network planning problem for a CO2 network in Western Germany.
This study investigates the progress made in LP and MILP solver performance during the last two decades by comparing the solver software from the beginning of the millennium with the codes available today. On average, we found that for solving LP/MILP, computer hardware got about 20 times faster, and the algorithms improved by a factor of about nine for LP and around 50 for MILP, which gives a total speed-up of about 180 and 1,000 times, respectively. However, these numbers have a very high variance and they considerably underestimate the progress made on the algorithmic side: many problem instances can nowadays be solved within seconds, which the old codes are not able to solve within any reasonable time.
We propose a partially functional autoregressive model with exogenous variables (pFAR) to describe the dynamic evolution of serially correlated functional data. It provides a unified framework to model both the temporal dependence on multiple lagged functional covariates and the causal relation with ultrahigh-dimensional exogenous scalar covariates. Estimation is conducted under a two-layer sparsity assumption, where only a few groups and elements are supposed to be active, yet without knowing their number and location in advance. We establish asymptotic properties of the estimator and investigate its finite sample performance along with simulation studies. We demonstrate the application of pFAR with the high-resolution natural gas flows in Germany, where the pFAR model provides insightful interpretation as well as good out-of-sample forecast accuracy.
Article’s scientific prestige: Measuring the impact of individual articles in the web of science
(2023)
We performed a citation analysis on the Web of Science publications consisting of more than 63 million articles and 1.45 billion citations on 254 subjects from 1981 to 2020. We proposed the Article’s Scientific Prestige (ASP) metric and compared this metric to the number of citations (#Cit) and journal grade in measuring the scientific impact of individual articles in the large-scale hierarchical and multi-disciplined citation network. In contrast to #Cit, ASP, which is computed based on eigenvector centrality, considers both direct and indirect citations, and provides steady-state evaluation across different disciplines. We found that ASP and #Cit are not aligned for most articles, with a growing mismatch amongst the less cited articles. While both metrics are reliable for evaluating the prestige of articles such as Nobel Prize winning articles, ASP tends to provide more persuasive rankings than #Cit when the articles are not highly cited. The journal grade, which is eventually determined by a few highly cited articles, is unable to properly reflect the scientific impact of individual articles. The number of references and coauthors are less relevant to scientific impact, but subjects do make a difference.
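To illustrate the kind of computation behind an eigenvector-centrality-based score, the following hedged sketch runs a simple power-iteration-style ranking on a tiny citation graph; the damping and normalization choices are generic, not the paper's exact ASP definition.

```python
# Hedged sketch: eigenvector-style centrality on a tiny citation graph,
# where score flows from a citing article to the articles it cites.
def centrality(cites, iterations=100, damping=0.85):
    """cites[a] = list of articles that article a cites."""
    nodes = list(cites)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for citing, cited_list in cites.items():
            if not cited_list:
                continue
            share = damping * score[citing] / len(cited_list)
            for cited in cited_list:
                new[cited] += share
        score = new
    return score

# Toy network: article "c" is cited directly and indirectly and ends up ranked first,
# illustrating that indirect citations also contribute to the score.
graph = {"a": ["c"], "b": ["a", "c"], "c": [], "d": ["b"]}
print(sorted(centrality(graph).items(), key=lambda kv: -kv[1]))
```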
This paper introduces an implementation for solving the single-source shortest path problem on distributed-memory machines. It is tailored to power-law graphs and scales to trillions of edges.
The new implementation reached 2nd and 10th place in the latest Graph500 benchmark in June 2022 and handled the largest and second-largest graphs among all participants.
The Steiner tree problem in graphs is a classical problem that commonly arises in practical applications as one of many variants. While often a strong relationship between different Steiner tree problem variants can be observed, solution approaches employed so far have been prevalently problem-specific. In contrast, this paper introduces a general-purpose solver that can be used to solve both the classical Steiner tree problem and many of its variants without modification. This versatility is achieved by transforming various problem variants into a general form and solving them by using a state-of-the-art MIP-framework. The result is a high-performance solver that can be employed in massively parallel environments and is capable of solving previously unsolved instances.
The concept of reduction has frequently distinguished itself as a pivotal ingredient of exact solving approaches for the Steiner tree problem in graphs. In this paper we broaden the focus and consider reduction techniques for three Steiner problem variants that have been extensively discussed in the literature and entail various practical applications: The prize-collecting Steiner tree problem, the rooted prize-collecting Steiner tree problem and the maximum-weight connected subgraph problem.
By introducing and subsequently deploying numerous new reduction methods, we are able to drastically decrease the size of a large number of benchmark instances, already solving more than 90 percent of them to optimality. Furthermore, we demonstrate the impact of these techniques on exact solving, using the example of the state-of-the-art Steiner problem solver SCIP-Jack.
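For readers unfamiliar with graph reductions, the following hedged sketch shows two classical degree-based tests for the plain Steiner tree problem (deleting non-terminal leaves and contracting degree-2 non-terminals). These textbook reductions are shown for illustration only; the techniques developed in the paper for the prize-collecting and connected-subgraph variants are substantially more involved.

```python
# Hedged sketch: repeatedly apply degree-1 and degree-2 reductions for the
# Steiner tree problem in graphs.
def reduce_graph(adj, terminals):
    """adj: dict vertex -> dict neighbor -> edge weight (undirected)."""
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in terminals or v not in adj:
                continue
            nbrs = list(adj[v].items())
            if len(nbrs) == 0:          # isolated non-terminal: drop it
                del adj[v]
                changed = True
            elif len(nbrs) == 1:        # non-terminal leaf: never in an optimal tree
                (u, _), = nbrs
                del adj[u][v]
                del adj[v]
                changed = True
            elif len(nbrs) == 2:        # degree-2 non-terminal: replace by one edge
                (u, wu), (x, wx) = nbrs
                new_w = wu + wx
                old = adj[u].get(x)
                if old is None or new_w < old:
                    adj[u][x] = adj[x][u] = new_w
                del adj[u][v]
                del adj[x][v]
                del adj[v]
                changed = True
    return adj

# Toy instance with terminals {1, 3}: vertex 4 is pruned as a non-terminal leaf
# and vertex 2 is contracted, leaving a single edge 1-3 of weight 3.
g = {1: {2: 1}, 2: {1: 1, 3: 2, 4: 5}, 3: {2: 2}, 4: {2: 5}}
print(reduce_graph(g, terminals={1, 3}))
```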
Forecasting natural gas demand and supply is essential for an efficient operation of the German gas distribution system and a basis for the operational decisions of the transmission system operators. The German gas market is moving towards more short-term planning, in particular, day-ahead contracts. This increases the difficulty that the operators in the dispatching centre are facing, as well as the necessity of accurate forecasts. This paper presents a novel predictive model that provides day-ahead forecasts of the high resolution gas flow by developing a Functional AutoRegressive model with eXogenous variables (FARX). The predictive model allows the dynamic patterns of hourly gas flows to be described in a wide range of historical profiles, while also taking the relevant determinants data into account. By taking into account a richer set of information, FARX provides stronger performance in real data analysis, with both accuracy and high computational efficiency. Compared to several alternative models in out-of-sample forecasts, the proposed model can improve forecast accuracy by at least 12% and up to 5-fold for one node, 3% to 2-fold and 2-fold to 4-fold for the other two nodes. The results show that lagged 1-day gas flow and nominations are important predictors, and with their presence in the forecast model, temperature becomes insignificant for short-term predictions.
As the natural gas market is moving towards short-term planning, accurate and robust short-term forecasts of natural gas demand and supply are of fundamental importance for a stable energy supply, natural gas control scheduling, and daily transport operation. We propose a hybrid forecast model, the Functional AutoRegressive and Convolutional Neural Network model, based on state-of-the-art statistical modeling and artificial neural networks. We conduct short-term forecasting of the hourly natural gas flows at 92 distribution nodes in the German high-pressure gas pipeline network, showing that the proposed model provides consistently high and stable accuracy for different types of nodes. It outperforms all the alternative models, with an improvement in relative accuracy of up to twofold for plant nodes and up to fourfold for municipal nodes. For the border nodes with rather flat gas flows, its accuracy is comparable to the best-performing alternative model.
Parallelization of the constraint integer programming solver SCIP
(2013)
Constraint integer programming (CIP) integrates modeling techniques and solution methods from the fields of constraint programming (CP), mixed integer programming (MIP), and satisfiability (SAT). As a result, constraint integer programming can handle a broad class of optimization problems. SCIP (Solving Constraint Integer Programs) is implemented as a solver for CIPs and is continuously being extended, primarily by researchers at Zuse Institute Berlin (ZIB). This paper introduces two parallel extensions of SCIP developed by the authors: ParaSCIP, which runs massively in parallel across multiple compute nodes, and FiberSCIP, which runs in (thread-)parallel on a single shared-memory multi-core machine. ParaSCIP has been run with up to 7,168 cores on the HLRN II supercomputer to solve a single instance, and with up to 512 cores on the Fujitsu PRIMERGY RX200S5 at the Institute of Statistical Mathematics. On the latter machine it solved dg012142, a previously unsolved instance from MIPLIB2010, to optimality.
SCIP-Jack: An exact high performance solver for Steiner tree problems in graphs and related problems
(2021)
The Steiner tree problem in graphs is one of the classic combinatorial optimization problems. Furthermore, many related problems, such as the rectilinear Steiner tree problem or the maximum-weight connected subgraph problem, have been described in the literature, with a wide range of practical applications. To embrace this wealth of problem classes, the solver SCIP-JACK has been developed as an exact framework for the classic Steiner tree problem and 11 related problems. Moreover, the solver comes with both shared- and distributed-memory extensions by means of the UG framework. Besides its versatility, SCIP-JACK is highly competitive for most of the 12 problem classes it can solve, as demonstrated, for instance, by its top ranking in the recent PACE 2018 Challenge. This article describes the current state of SCIP-JACK and provides up-to-date computational results, including several instances that can now be solved to optimality for the first time.
Cutting plane selection is a subroutine used in all modern mixed-integer linear programming solvers with the goal of selecting the subset of generated cuts that induces the best solver performance. These solvers have millions of parameter combinations and are therefore excellent candidates for parameter tuning. Cut selection scoring rules are usually weighted sums of different measurements, where the weights are parameters. We present a parametric family of mixed-integer linear programs together with infinitely many family-wide valid cuts. Some of these cuts can induce integer optimal solutions directly after being applied, while others fail to do so even if infinitely many of them are applied. We show, for a specific cut selection rule, that any finite grid search of the parameter space will miss every parameter value that selects integer-optimality-inducing cuts for infinitely many of the problems in our family. We propose a variation on the design of existing graph convolutional neural networks, adapting them to learn cut selection rule parameters. We present a reinforcement learning framework for selecting cuts and train our design within this framework on MIPLIB 2017 and a neural network verification data set. Our framework and design show that adaptive cut selection substantially improves performance over a diverse set of instances, but that finding a single function describing such a rule is difficult. Code for reproducing all experiments is available at https://github.com/Opt-Mucca/Adaptive-Cutsel-MILP.
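To illustrate the kind of weighted-sum scoring rule referred to above, the following sketch scores cuts by a weighted combination of three common measurements (efficacy, objective parallelism, integer support). The measures, weights, and data are illustrative and are not the specific rule analysed in the paper:

# Minimal sketch of a weighted-sum cut scoring rule; actual solvers such as SCIP
# use their own measure definitions and default weights.
import numpy as np

def score_cut(a, b, x_lp, c, is_int, weights):
    """Score a cut  a^T x <= b  at the LP solution x_lp with objective c."""
    norm_a = np.linalg.norm(a)
    # Efficacy: distance by which the LP solution violates the cut.
    efficacy = max(0.0, (a @ x_lp - b) / norm_a)
    # Parallelism between the cut and the objective direction.
    obj_parallelism = abs(a @ c) / (norm_a * np.linalg.norm(c))
    # Fraction of the cut's nonzero coefficients on integer variables.
    nz = a != 0
    n_nz = int(np.count_nonzero(nz))
    int_support = np.count_nonzero(nz & is_int) / n_nz if n_nz else 0.0
    return (weights["efficacy"] * efficacy
            + weights["parallelism"] * obj_parallelism
            + weights["int_support"] * int_support)

def select_cuts(cuts, x_lp, c, is_int, weights, k):
    """Pick the k highest-scoring cuts; each cut is a pair (a, b)."""
    scored = sorted(cuts, key=lambda cut: score_cut(cut[0], cut[1], x_lp, c, is_int, weights),
                    reverse=True)
    return scored[:k]

if __name__ == "__main__":
    x_lp = np.array([0.5, 0.7, 1.2])
    c = np.array([1.0, 2.0, 0.0])
    is_int = np.array([True, True, False])
    weights = {"efficacy": 1.0, "parallelism": 0.1, "int_support": 0.1}
    cuts = [(np.array([1.0, 1.0, 0.0]), 1.0), (np.array([0.0, 1.0, 1.0]), 2.0)]
    print(select_cuts(cuts, x_lp, c, is_int, weights, k=1))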
Linear energy system models are a crucial component of energy system design and operations, as well as energy policy consulting. If detailed enough, such models lead to large-scale linear programs, which can be intractable even for the best state-of-the-art solvers. This article introduces an interior-point solver that exploits common structures of energy system models to run efficiently in parallel on distributed-memory systems. The solver is designed for linear programs with doubly-bordered block-diagonal constraint matrix and makes use of a Schur complement based decomposition. In order to handle the large number of linking constraints and variables commonly observed in energy system models, a distributed Schur complement preconditioner is used. In addition, the solver features a number of more generic techniques such as parallel matrix scaling and structure-preserving presolving. The implementation is based on the solver PIPS-IPM. We evaluate the computational performance on energy system models with up to four billion nonzero entries in the constraint matrix, and up to one billion columns and one billion rows. This article mainly concentrates on the energy system model ELMOD, a linear optimization model representing the European electricity markets using a nodal-pricing market clearing, which has been widely applied in the literature on energy system analyses in recent years. However, we also demonstrate that the new solver is applicable to other energy system models.
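For illustration, the arrowhead structure that such a Schur complement decomposition exploits can be sketched as follows (generic notation, not taken from the article):

% Generic doubly-bordered block-diagonal (arrowhead) structure and the
% associated Schur complement; notation is illustrative only.
\[
K =
\begin{pmatrix}
K_1    &        &        & B_1^\top \\
       & \ddots &        & \vdots   \\
       &        & K_N    & B_N^\top \\
B_1    & \cdots & B_N    & K_0
\end{pmatrix},
\qquad
S = K_0 - \sum_{i=1}^{N} B_i K_i^{-1} B_i^\top .
\]
% Each diagonal block K_i can be factorized independently (in parallel); only the
% comparatively small Schur complement system S \Delta z_0 = r_0 couples the
% blocks through the linking variables and constraints.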
BEAM-ME: Accelerating Linear Energy Systems Models by a Massively Parallel Interior Point Method
(2020)
A decision support system relies on frequent re-solving of similar problem instances. While the general structure remains the same in the corresponding applications, the input parameters are updated on a regular basis. We propose a generative neural network design for learning integer decision variables of mixed-integer linear programming (MILP) formulations of these problems. We utilise a deep neural network discriminator and a MILP solver as our oracle to train our generative neural network. In this article, we present the results of our design applied to the transient gas optimisation problem. With the trained network we produce a feasible solution in 2.5 s, use it as a warm-start solution, and thereby decrease the time to the globally optimal solution by 60.5%.
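A minimal sketch of the warm-start step, assuming PySCIPOpt as the solver interface and a stand-in for the network's prediction; the model and data below are illustrative and not the transient gas formulation:

# Turning predicted integer values into a MIP warm start via PySCIPOpt.
from pyscipopt import Model

def warm_start(model, int_vars, predicted_values):
    """Hand a predicted integer assignment to the solver as a start solution."""
    sol = model.createSol()
    for var, val in zip(int_vars, predicted_values):
        model.setSolVal(sol, var, round(val))
    accepted = model.trySol(sol)   # solver checks feasibility before accepting
    return accepted

if __name__ == "__main__":
    m = Model("toy")
    x = m.addVar(vtype="B", name="x")
    y = m.addVar(vtype="B", name="y")
    m.addCons(x + y >= 1)
    m.setObjective(x + 2 * y, "minimize")
    # predicted values would come from the trained network; here they are made up
    print("warm start accepted:", warm_start(m, [x, y], [1.0, 0.0]))
    m.optimize()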
KOBV Jahresbericht 2021-2022
(2023)
Compressor stations are the heart of every high-pressure gas transport network. Located at intersection areas of the network, they form part of large, complex plants in which, together with valves and regulators, they are responsible for routing and pushing the gas through the network. Owing to their complexity and a lack of data, compressor stations are usually treated in the scientific literature in a highly simplified and idealized manner. As part of an ongoing project with one of Germany's largest transmission system operators to develop a decision support system for their dispatching center, we investigated how to automate the control of compressor stations. Each station has to be in a particular configuration, which, in combination with the other nearby elements, leads to a discrete set of up to 2000 feasible operation modes in the intersection area. Since the desired performance of the station changes over time, the configuration of the station has to adapt. Our goal is to minimize the necessary changes in the overall operation modes and related elements over time, while fulfilling a preset performance envelope or demand scenario. This article describes the chosen model and the implemented mixed-integer-programming-based algorithms for tackling this challenge. Extensive computational results on real-world data demonstrate the performance of our approach.
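The following is a highly simplified sketch of the mode-selection idea: binary variables pick one operation mode per time step, a demanded performance level must be met, and the objective counts mode changes. It uses PySCIPOpt for illustration; the modes, capacities, and demands are invented, and the actual model is far more detailed:

# Toy mode-scheduling MIP: choose one mode per time step, cover demand,
# and minimize the number of mode changes.
from pyscipopt import Model, quicksum

def solve_mode_schedule(capacity, demand):
    T, M = len(demand), len(capacity)
    model = Model("mode_schedule")
    x = {(t, m): model.addVar(vtype="B", name=f"x_{t}_{m}") for t in range(T) for m in range(M)}
    change = {t: model.addVar(vtype="B", name=f"chg_{t}") for t in range(1, T)}

    for t in range(T):
        model.addCons(quicksum(x[t, m] for m in range(M)) == 1)                       # exactly one mode
        model.addCons(quicksum(capacity[m] * x[t, m] for m in range(M)) >= demand[t]) # meet demand
    for t in range(1, T):
        for m in range(M):
            # change[t] = 1 whenever the selected mode differs from the previous step
            model.addCons(change[t] >= x[t, m] - x[t - 1, m])

    model.setObjective(quicksum(change[t] for t in range(1, T)), "minimize")
    model.optimize()
    return [m for t in range(T) for m in range(M) if model.getVal(x[t, m]) > 0.5]

if __name__ == "__main__":
    # three modes with increasing capacity, a small demand profile
    print(solve_mode_schedule(capacity=[10, 20, 30], demand=[8, 18, 18, 28, 9]))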
KOBV Jahresbericht 2019-2020
(2021)
The current KOBV annual report covers what has happened in the member libraries and partner projects over the past two years and what is changing at the consortium's head office and in the library landscape. The 2019/2020 edition contains the focus section "Digitalisierung" (digitization), offering various perspectives on the digital working world.
The European gas infrastructure is being disruptively transformed into a future decarbonized energy system, a process that must be accelerated in light of the recent political situation. With a growing hydrogen market, pipeline-based transport using the existing natural gas infrastructure becomes economically sensible, helps increase public acceptance, and accelerates the transition process. This article analyses the maximum technically feasible injection of hydrogen into the existing German natural gas transport network with respect to regulatory limits on gas quality. The analysis is based on a transient tracking model built on the general pooling problem including linepack. It turns out that, even under strict limits, the gas network offers enough capacity to serve as a guaranteed off-taker for a large share of the green hydrogen production capacity planned up to 2030.
The maximum-cut problem is one of the fundamental problems in combinatorial optimization. With the advent of quantum computers, both the maximum-cut problem and the equivalent quadratic unconstrained binary optimization problem have attracted much interest in recent years. This article aims to advance the state of the art in the exact solution of both problems by using mathematical programming techniques. The main focus lies on sparse problem instances, although dense ones can also be solved. We enhance several algorithmic components such as reduction techniques and cutting-plane separation algorithms, and combine them in an exact branch-and-cut solver. Furthermore, we provide a parallel implementation. The new solver is shown to significantly outperform existing state-of-the-art software for sparse maximum-cut and quadratic unconstrained binary optimization instances. Furthermore, we improve the best known bounds for several instances from the 7th DIMACS Challenge and the QPLIB, and solve some of them to optimality for the first time.
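As background to the stated equivalence, the well-known correspondence between QUBO and maximum-cut can be sketched as follows (generic notation; the exact weights follow from expanding the substitution and are not spelled out here):

% Sketch of the well-known equivalence between QUBO and maximum-cut
% (generic notation, not taken from the article).
\[
\max_{x \in \{0,1\}^n} x^\top Q x
\;\xrightarrow{\, x_i = \frac{1}{2}(1 + s_0 s_i)\,}\;
\max_{s \in \{-1,1\}^{n+1}} \sum_{0 \le i < j \le n} J_{ij}\, s_i s_j + \mathrm{const},
\]
% where the extra variable s_0 absorbs the linear terms. Since
% s_i s_j = 1 - 2 [ i and j lie on different sides of a cut ], maximizing the
% Ising objective amounts to a maximum-cut problem on n+1 vertices with edge
% weights w_{ij} = -J_{ij}, and vice versa.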
DeepGreen is a service that makes it easier for participating institutional open-access repositories, subject-based open-access repositories, and research information systems to make publisher publications relevant to them available open access at regular intervals via interfaces. The wide range of relations between the actors involved, diverse licensing conditions, and technical requirements make the topic complex. The aim of this guide is, in addition to covering all of these accompanying topics, to provide recommendations in particular for the smooth use of the data transfer. Furthermore, a preceding workflow evaluation highlights differences and particularities in the work steps of institutional and subject-based open-access repositories, likewise enriched with recommendations.
About 20% of the German energy demand is supplied by natural gas. Additionally, Germany serves as a transit country for about twice that amount. Thereby, the German network represents a central hub in the European natural gas transport network. The transport infrastructure is operated by so-called transmission system operators (TSOs). The number-one priority of the TSOs is to ensure security of supply. However, the TSOs have no knowledge of the intentions and planned actions of the shippers (traders). Open Grid Europe (OGE), one of Germany's largest TSOs, operates a high-pressure transport network of about 12,000 km length. Since flexibility and security of supply are of utmost importance to the German energy transition ("Energiewende"), especially with the introduction of peak-load gas power stations, being able to predict the in- and out-flows of the network is of great importance. In this paper we introduce a new hybrid forecast method applied to gas flows at the boundary nodes of a transport network. The new method employs optimized feature minimization and selection. We use a combination of an FAR model, an LSTM deep neural network, and mathematical programming to achieve robust, high-quality forecasts on real-world data for different types of network nodes (a simplified illustration follows after the keywords below).
Keywords: Gas Forecast, Time series, Hybrid Method, FAR, LSTM, Mathematical Optimisation
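As a minimal, hypothetical illustration of combining two forecasters into a hybrid forecast, the following sketch fits a convex-combination weight on held-out data; this is not the feature-selection-based method of the paper:

# Combine two forecasters (e.g. a statistical model and a neural network) by a
# convex combination whose weight is fitted on held-out data. Illustrative only.
import numpy as np

def fit_weight(pred_a, pred_b, actual):
    """Find w in [0, 1] minimizing ||w*pred_a + (1-w)*pred_b - actual||^2."""
    d = pred_a - pred_b
    denom = float(d @ d)
    if denom == 0.0:
        return 0.5
    w = float(d @ (actual - pred_b)) / denom
    return min(1.0, max(0.0, w))

def hybrid(pred_a, pred_b, w):
    return w * pred_a + (1.0 - w) * pred_b

if __name__ == "__main__":
    actual = np.array([10.0, 12.0, 11.0, 13.0])
    far_pred = np.array([9.5, 12.5, 10.5, 13.5])    # stand-in for a FAR forecast
    lstm_pred = np.array([10.5, 11.0, 11.5, 12.0])  # stand-in for an LSTM forecast
    w = fit_weight(far_pred, lstm_pred, actual)
    print(w, hybrid(far_pred, lstm_pred, w))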