
Learning and Propagating Lagrangian Variable Bounds for Mixed-Integer Nonlinear Programming

Abstract: Optimization-based bound tightening (OBBT) is a domain reduction technique commonly used in nonconvex mixed-integer nonlinear programming that solves a sequence of auxiliary linear programs. Each variable is minimized and maximized to obtain the tightest bounds valid for a global linear relaxation. This paper shows how the dual solutions of the auxiliary linear programs can be used to learn what we call Lagrangian variable bound constraints. These are linear inequalities that explain OBBT's domain reductions in terms of the bounds on other variables and the objective value of the incumbent solution. Within a spatial branch-and-bound algorithm, they can be learnt a priori (during OBBT at the root node) and propagated within the search tree at very low computational cost. Experiments with an implementation inside the MINLP solver SCIP show that this reduces the number of branch-and-bound nodes and speeds up solution times.
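To make the idea concrete, here is a minimal sketch (not the SCIP implementation described in the paper): for a linear relaxation A x <= b, l <= x <= u with objective cutoff c'x <= cbar, the OBBT LP min x_k yields dual multipliers lam >= 0 and mu >= 0, and weak duality gives the valid inequality x_k >= (e_k + A'lam + mu*c)'x - lam'b - mu*cbar. This is a Lagrangian variable bound: its right-hand side depends only on the other variables' bounds and on the cutoff, so it can be re-evaluated cheaply whenever those change. The Python snippet below illustrates the construction and the propagation step on a hypothetical three-variable toy instance with hand-derived duals; the data and the helper names build_lvb and propagate_lvb are invented for this sketch.

    import numpy as np

    # Toy LP relaxation used only for illustration (hypothetical data):
    #   A x <= b,  l <= x <= u,  plus the objective cutoff  c^T x <= cbar
    #   imposed by the incumbent solution.
    A = np.array([[-1.0, -1.0, -1.0]])    # encodes x1 + x2 + x3 >= 6
    b = np.array([-6.0])
    c = np.array([0.0, 1.0, 0.0])         # objective of the relaxed problem
    l = np.array([0.0, 0.0, 0.0])
    u = np.array([10.0, 10.0, 3.0])
    cbar = 2.0                            # incumbent objective value (cutoff)

    def build_lvb(k, lam, mu, A, b, c):
        """Assemble the Lagrangian variable bound  x_k >= r^T x - lam^T b - mu*cbar
        from dual multipliers lam >= 0 (rows of A x <= b) and mu >= 0 (objective
        cutoff) of the OBBT LP  min x_k.  Validity follows from weak duality:
        x_k >= x_k + lam^T (A x - b) + mu (c^T x - cbar) for every feasible x."""
        r = np.eye(len(c))[k] + A.T @ lam + mu * c
        assert abs(r[k]) < 1e-9           # sketch assumes x_k itself is not at a bound
        return r, -lam @ b, mu

    def propagate_lvb(k, r, const, mu, l, u, cbar):
        """Re-derive a lower bound on x_k from the *current* bounds and cutoff by
        minimizing the right-hand side of the LVB over the box [l, u] -- the cheap
        propagation step that replaces re-solving the OBBT LP."""
        bound = const - mu * cbar
        for j in range(len(r)):
            if j != k:
                bound += r[j] * (l[j] if r[j] >= 0.0 else u[j])
        return bound

    # Dual multipliers of the OBBT LP  min x1  for this toy instance, derived by
    # hand here; inside a solver they are read off the LP dual solution.
    lam, mu = np.array([1.0]), 1.0
    r, const, mu = build_lvb(0, lam, mu, A, b, c)   # LVB:  x1 >= 6 - x3 - cbar

    print(propagate_lvb(0, r, const, mu, l, u, cbar))        # 1.0 (the OBBT bound)
    print(propagate_lvb(0, r, const, mu, l, u, cbar=1.5))    # 1.5 after a better incumbent
    u[2] = 2.0                                               # x3's upper bound tightens
    print(propagate_lvb(0, r, const, mu, l, u, cbar=1.5))    # 2.5, no LP re-solve needed

In this toy instance the learnt inequality reads x1 >= 6 - x3 - cbar, so an improved incumbent (smaller cbar) or a tighter upper bound on x3 immediately tightens x1's lower bound without solving any further LP, which is the cheap in-tree propagation the abstract refers to.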

Metadata
Author: Ambros Gleixner, Stefan Weltge
Document Type: In Proceedings
Parent Title (English): Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, 10th International Conference, CPAIOR 2013, Yorktown Heights, NY, USA, May 18-22, 2013
Volume: 7874
First Page: 355
Last Page: 361
Series: Lecture Notes in Computer Science
Year of first publication: 2013
Preprint: urn:nbn:de:0297-zib-17631
DOI: https://doi.org/10.1007/978-3-642-38171-3_26