DeepGreen has built and established an infrastructure that collects journal articles from academic publishers and delivers them to eligible libraries for publication in their repositories. DeepGreen supports libraries, acting as service providers for universities, non-university institutions, and the researchers working there, in making publications freely available on open-access repositories, and it fosters cooperation between research institutions and publishers. DeepGreen was funded by the Deutsche Forschungsgemeinschaft from January 2016 to June 2021 and is now being continued for two years by the Kooperativer Bibliotheksverbund Berlin-Brandenburg, the Bayerische Staatsbibliothek, and the Universitätsbibliothek Erlangen-Nürnberg, who share the work among themselves at their own expense. This article examines the many facets of realizing DeepGreen and discusses the prospects of this central open-access infrastructure for German research institutions.
DeepGreen was funded by the Deutsche Forschungsgemeinschaft (DFG) in a second project phase from 1 August 2018 to 30 June 2021. DeepGreen supports libraries, acting as service providers for universities, non-university research institutions, and the researchers working there, in making publications freely available on open-access repositories, and it fosters cooperation between research institutions and publishers. The second project phase involved the Kooperativer Bibliotheksverbund Berlin-Brandenburg, the Bayerische Staatsbibliothek, the Bibliotheksverbund Bayern, the university libraries of the Friedrich-Alexander-Universität Erlangen-Nürnberg and the Technische Universität Berlin, and the Helmholtz Open Science Office. The project successfully developed a technical and organizational solution for the automated distribution of article data from academic publishers to institutional and subject repositories. The second project phase focused on testing the data hub in practice and on extending it to further data recipients and further publishers. Following the DFG-funded project period, DeepGreen has entered a two-year pilot operation whose goal is to prepare the transition to regular nationwide operation.
DeepGreen is a service that makes it easier for participating institutional open-access repositories, open-access subject repositories, and research information systems to make the publisher publications relevant to them available open access at regular intervals via interfaces. The broad range of relationships between the actors involved, the diverse licensing conditions, and the technical requirements make the topic a complex one. Alongside covering these accompanying topics, the aim of this guide is in particular to provide recommendations for the smooth use of the data transfer. In addition, a preceding workflow evaluation highlights differences and particularities in the work steps of institutional open-access repositories and open-access subject repositories, likewise enriched with recommendations.
Mixed-integer programming (MIP) problems are arguably among the hardest classes of optimization problems. This paper describes how we solved 21 previously unsolved MIP instances from the MIPLIB benchmark sets. To achieve these results, we used an enhanced version of ParaSCIP, setting a new record for the largest-scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper, we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing, and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
Primal heuristics play an important role in solving mixed integer programs (MIPs). They often provide good feasible solutions early and help to reduce the time needed to prove optimality. In this paper, we present a scheme for start heuristics that can be executed without previous knowledge of an LP solution or a previously found integer feasible solution. It uses global structures available within MIP solvers to iteratively fix integer variables and propagate these fixings. Thereby, fixings are determined based on the predicted impact they have on the subsequent domain propagation. If sufficiently many variables can be fixed that way, the resulting problem is solved first as an LP, and then as an auxiliary MIP if the rounded LP solution does not already provide a feasible solution. We present three primal heuristics that use this scheme based on different global structures. Our computational experiments on standard MIP test sets show that the proposed heuristics find solutions for about 60 % of the instances and thereby help to improve several performance measures for MIP solvers, including the primal integral and the average solving time.
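The fix-and-propagate loop described above can be pictured with a small, self-contained sketch on a toy model. The scoring rule, the fixing direction, and all names below are hypothetical stand-ins for illustration only; they are not the heuristics or the SCIP API from the paper.

```python
# Sketch of a fix-and-propagate start heuristic on a toy model with
# constraints sum_j a_j * x_j <= rhs (nonnegative a_j) and integer bounds.
# All rules below are illustrative assumptions, not the paper's heuristics.

def propagate(constraints, lb, ub):
    """Simple bound tightening, assuming nonnegative coefficients."""
    changed = True
    while changed:
        changed = False
        for coeffs, rhs in constraints:
            # minimal activity with every variable at its lower bound
            min_act = sum(a * lb[j] for j, a in coeffs.items())
            for j, a in coeffs.items():
                if a <= 0:
                    continue
                slack = rhs - (min_act - a * lb[j])
                new_ub = slack // a          # integer variables: round down
                if new_ub < ub[j]:
                    ub[j] = new_ub
                    changed = True
                    if ub[j] < lb[j]:
                        return False         # local infeasibility detected
    return True

def fix_and_propagate(constraints, lb, ub, target_fixed=0.8):
    n = len(lb)
    fixed = {j for j in range(n) if lb[j] == ub[j]}
    while len(fixed) < target_fixed * n:
        # hypothetical score: prefer the variable appearing in most constraints,
        # as a crude proxy for its predicted propagation impact
        cand = max((j for j in range(n) if j not in fixed),
                   key=lambda j: sum(1 for coeffs, _ in constraints if j in coeffs))
        ub[cand] = lb[cand]                  # fix to its lower bound (one possible rule)
        fixed.add(cand)
        if not propagate(constraints, lb, ub):
            return None                      # a real solver would backtrack or abort
    # the remaining problem would now be handed to an LP and, if needed, a sub-MIP
    return lb, ub

# toy instance: 3 x0 + 2 x1 + x2 <= 10, x in {0,...,4}^3
cons = [({0: 3, 1: 2, 2: 1}, 10)]
print(fix_and_propagate(cons, [0, 0, 0], [4, 4, 4]))
```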
The analysis of infeasible subproblems plays an important role in solving mixed integer programs (MIPs) and is implemented in most major MIP solvers. There are two fundamentally different concepts to generate valid global constraints from infeasible subproblems. The first is to analyze the sequence of implications, obtained by domain propagation, that led to infeasibility. The result of this analysis is one or more sets of contradicting variable bounds from which so-called conflict constraints can be generated. This concept is called conflict graph analysis; it originates from solving satisfiability problems and is used similarly in constraint programming. The second concept is to analyze infeasible linear programming (LP) relaxations. Every ray of the dual LP provides a set of multipliers that can be used to generate a single new globally valid linear constraint. This method is called dual proof analysis. The main contribution of this paper is twofold. Firstly, we present three enhancements of dual proof analysis: presolving via variable cancellation, strengthening by applying mixed integer rounding functions, and a filtering mechanism. Further, we provide an extensive computational study evaluating the impact of every presented component regarding dual proof analysis. Secondly, this paper presents the first integrated approach to use both conflict graph and dual proof analysis simultaneously within a single MIP solution process. All experiments are carried out on general MIP instances from the standard public test set MIPLIB 2017; the presented algorithms have been implemented within the non-commercial MIP solver SCIP and the commercial MIP solver FICO Xpress.
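To make the dual proof idea concrete, the following is a minimal sketch in generic LP notation (the notation is assumed here, not quoted from the paper): an infeasible node LP yields a Farkas ray whose nonnegative aggregation of the global rows is a globally valid constraint that cuts off the node's local bounds.

```latex
% Sketch of the dual proof construction (notation assumed):
% the node LP relaxation is infeasible under the local bounds \ell \le x \le u.
\begin{align*}
  &\text{LP relaxation: } && Ax \ge b, \qquad \ell \le x \le u
      \quad \text{(infeasible at the current node)}\\
  &\text{Farkas / dual ray: } && y \ge 0 \text{ such that }
      \max_{\ell \le x \le u} \; y^{\top} A x \;<\; y^{\top} b\\
  &\text{dual proof constraint: } && y^{\top} A x \;\ge\; y^{\top} b
      \quad \text{(a nonnegative aggregation of global rows, hence globally valid)}
\end{align*}
```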
Mixed integer nonlinear programs (MINLPs) are arguably among the hardest optimization problems, with a wide range of applications. MINLP solvers that are based on linear relaxations and spatial branching work similarly to mixed integer programming (MIP) solvers in the sense that they are based on a branch-and-cut algorithm, enhanced by various heuristics, domain propagation, and presolving techniques. However, the analysis of infeasible subproblems, which is an important component of most major MIP solvers, has hardly been studied in the context of MINLPs. There are two main approaches for infeasibility analysis in MIP solvers: conflict graph analysis, which originates from artificial intelligence and constraint programming, and dual ray analysis.
The main contribution of this short paper is twofold. Firstly, we present the first computational study regarding the impact of dual ray analysis on convex and nonconvex MINLPs. In that context, we introduce a modified generation of infeasibility proofs that incorporates linearization cuts that are only locally valid. Secondly, we describe an extension of conflict analysis that works directly with the nonlinear relaxation of convex MINLPs instead of considering a linear relaxation. This is work in progress, and this short paper presents initial theoretical considerations for that part without a computational study.
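As a point of reference for the linearization cuts mentioned above, a standard gradient (outer-approximation) cut for a smooth convex constraint looks as follows; the notation is assumed, and the remark on local validity is a general observation rather than a statement of the paper's method.

```latex
% Gradient cut for a smooth convex constraint g(x) <= 0,
% linearized at a reference point \hat{x} (notation assumed):
\[
  g(\hat{x}) + \nabla g(\hat{x})^{\top}\,(x - \hat{x}) \;\le\; 0
\]
% By convexity, g(x) >= g(\hat{x}) + \nabla g(\hat{x})^T (x - \hat{x}),
% so every feasible point satisfies the cut. Linearizations that additionally
% rely on the current node's variable bounds remain valid only within those
% bounds, which is what makes such cuts "locally valid".
```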
Computing hardware has largely reached the physical limits for speeding up individual computing cores. Consequently, the main line of progress for new hardware is growing the number of computing cores within a single CPU. This makes the study of efficient parallelization schemes for computation-intensive algorithms more and more important. A natural precondition to achieving reasonable speedups from parallelization is maintaining a high workload of the available computational resources. At the same time, reproducibility and reliability are key requirements for software that is used in industrial applications. In this paper, we present the new parallelization concept for the state-of-the-art MIP solver FICO Xpress-Optimizer. MIP solvers like Xpress are expected to be deterministic. This inevitably results in synchronization latencies which render the goal of a satisfactory workload a challenge in itself. We address this challenge by following a partial information approach and separating the concepts of simultaneous tasks and independent threads from each other. Our computational results indicate that this leads to a much higher CPU workload and thereby to an improved, almost linear, scaling on modern high-performance CPUs. As an added value, the solution path that Xpress takes is not only deterministic in a fixed environment, but also, to a certain extent, thread-independent. This paper is an extended version of Berthold et al. [Parallelization of the FICO Xpress-Optimizer, in Mathematical Software – ICMS 2016: 5th International Conference, G.-M. Greuel, T. Koch, P. Paule, and A. Sommese, eds., Springer International Publishing, Berlin, 2016, pp. 251–258] containing more detailed technical descriptions, illustrative examples, and updated computational results.
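A minimal sketch of the task/thread separation idea, assuming only a generic thread pool and nothing from the Xpress implementation: results are merged in task order rather than completion order, so the outcome does not depend on which thread finishes first or on how many threads are used.

```python
# Illustrative sketch (not the Xpress implementation): decouple "tasks" from
# "threads" and merge task results in a fixed order, so the final outcome
# does not depend on thread timing.
from concurrent.futures import ThreadPoolExecutor

def solve_task(task_id):
    # placeholder for an independent piece of work (e.g., exploring a subtree)
    return task_id * task_id

tasks = list(range(32))            # many more tasks than threads keeps the workload high
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(solve_task, t) for t in tasks]
    # merging in submission order (not completion order) makes the result
    # deterministic and independent of the number of threads
    results = [f.result() for f in futures]

print(sum(results))                # same value for any thread count
```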
The Ubiquity Generator (UG) is a general framework for the external parallelization of mixed integer programming (MIP) solvers. In this paper, we present ParaXpress, a distributed memory parallelization of the powerful commercial MIP solver FICO Xpress. Besides sheer performance, an important feature of Xpress is that it provides an internal parallelization for shared memory systems. When aiming for the best possible performance of ParaXpress on a supercomputer, the question arises of how to balance the internal Xpress parallelization and the external parallelization by UG against each other. We provide computational experiments to address this question and show computational results for running ParaXpress on a Top500 supercomputer, using up to 43,344 cores in parallel.
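The balance in question can be pictured as splitting a fixed core budget between external UG workers and the internal threads each Xpress instance uses; the toy enumeration below only illustrates the arithmetic of that trade-off and is not taken from the paper, where the right split is an empirical question.

```python
# Hypothetical back-of-the-envelope split of a fixed core budget between
# external solver instances (UG workers) and internal threads per instance.
total_cores = 43_344

for threads_per_solver in (1, 2, 4, 8, 16):
    solvers = total_cores // threads_per_solver
    print(f"{solvers:6d} external solvers x {threads_per_solver:2d} internal threads "
          f"= {solvers * threads_per_solver} cores used")
```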
Parallel computing environments have recently become widespread. To benefit from them, programs have to be deployed on them effectively. This paper focuses on a parallelization of SCIP (Solving Constraint Integer Programs), which is a mixed-integer linear programming solver and constraint integer programming framework available in source code. There is a parallel extension of SCIP named ParaSCIP, which parallelizes SCIP on massively parallel distributed memory computing environments. This paper describes FiberSCIP, which is yet another parallel extension of SCIP to utilize multi-threaded parallel computation on shared memory computing environments, and makes the following contributions: First, we present the basic concept of having two parallel extensions, and the relationship between them and the parallelization framework provided by UG (Ubiquity Generator), including an implementation of deterministic parallelization. Second, we discuss the difficulties in achieving a good performance that utilizes all resources on an actual computing environment, and the difficulties of performance evaluation of the parallel solvers. Third, we present a way to evaluate the performance of new algorithms and parameter settings of the parallel extensions. Finally, we demonstrate the current performance of FiberSCIP for solving mixed-integer linear programs (MIPs) and mixed-integer nonlinear programs (MINLPs) in parallel.