B06
Branch-and-cut for mixed-integer robust chance-constrained optimization with discrete distributions
(2025)
We study robust chance-constrained problems with mixed-integer design variables and ambiguity sets consisting of discrete probability distributions. Allowing general non-convex constraint functions, we develop a branch-and-cut framework that uses scenario-based cutting planes to generate lower bounds. The cutting planes are obtained by exploiting the classical big-M reformulation of the chance-constrained problem in the case of discrete distributions. Furthermore, we integrate the computation of initial feasible solutions, obtained by a bundle method applied to an approximation of the original problem, into the branch-and-cut procedure. We conclude with a detailed discussion of the practical performance of the branch-and-cut framework with and without initial feasible solutions. In our experiments, we focus on gas transport problems under uncertainty and compare our method with solving the classical reformulation directly on various realistically sized instances.
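For orientation, the classical big-M reformulation referred to above can be sketched in simplified notation (ours): for scenarios $\xi^1,\dots,\xi^S$ with probabilities $p_1,\dots,p_S$, risk level $\alpha$, and constraint function $g$, the chance constraint $\mathbb{P}(g(x,\xi) \le 0) \ge 1-\alpha$ is replaced by $g(x,\xi^s) \le M_s z_s$ for $s=1,\dots,S$, $\sum_{s=1}^{S} p_s z_s \le \alpha$, and $z_s \in \{0,1\}$, where $z_s = 1$ marks a scenario whose constraint may be violated and $M_s$ is a sufficiently large constant.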
We propose a novel online learning framework for robust Bayesian optimization of uncertain black-box functions. While Bayesian optimization is well-suited for data-efficient optimization of expensive objectives, its standard form can be sensitive to hidden or varying parameters. To address this issue, we consider a min–max robust counterpart of the optimization problem and develop a practically efficient solution algorithm, BROVER (Bayesian Robust Optimization via Exploration with Regret minimization). Our method combines Gaussian process regression with a decomposition approach: the minimax structure is split into a non-convex online learning step based on the Follow-the-Perturbed-Leader algorithm and a subsequent minimization step in the decision variables. We prove a regret bound that vanishes under mild assumptions, ensuring asymptotic convergence to robust solutions. Numerical experiments on synthetic data validate the regret guarantees and demonstrate fast convergence to the robust optimum. Furthermore, we apply our method to the robust optimization of organic solar cell performance, where hidden process parameters and experimental variability naturally induce uncertainty. Our results on real-world data show that BROVER identifies solutions with strong robustness properties within relatively few iterations, thereby offering a modern and practical approach for data-driven black-box optimization under uncertainty.
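The decomposition can be illustrated with a minimal sketch (ours, not the BROVER implementation): the expensive black-box objective is replaced by a cheap test function, the Gaussian process surrogate is omitted, the uncertain parameter is restricted to a finite candidate set, and the decision step responds to the running average of the adversary's past choices for stability.

import numpy as np

rng = np.random.default_rng(0)

def f(x, theta):
    # cheap stand-in for the expensive black-box objective f(x; theta)
    return (x - theta) ** 2 + 0.1 * np.sin(5 * x)

thetas = np.linspace(-1.0, 1.0, 11)     # finite candidate set for the hidden parameter
cum_payoff = np.zeros_like(thetas)      # accumulated adversarial payoffs per candidate
grid = np.linspace(-2.0, 2.0, 401)      # decision candidates (a GP-based step in the actual method)
history = []                            # adversary's past choices
eta = 1.0                               # perturbation scale for Follow-the-Perturbed-Leader
x = 0.0

for t in range(100):
    # adversary: Follow-the-Perturbed-Leader over the finite candidate set
    perturbed = cum_payoff + eta * rng.exponential(size=thetas.shape)
    theta_t = thetas[np.argmax(perturbed)]
    history.append(theta_t)

    # decision maker: minimize the average objective over the adversary's past choices
    avg_obj = np.mean([f(grid, th) for th in history], axis=0)
    x = grid[np.argmin(avg_obj)]

    # the adversary observes its payoff for every candidate parameter
    cum_payoff += f(x, thetas)

worst_case = f(x, thetas).max()
print(f"robust choice x ~ {x:.3f}, worst-case value ~ {worst_case:.3f}")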
Constructing ambiguity sets in distributionally robust optimization is difficult and currently receives increased attention. In this paper, we focus on mixture models with finitely many reference distributions. We present two different solution concepts for robust joint chance-constrained optimization problems with these ambiguity sets and non-convex constraint functions. Both concepts rely on solving an approximation problem that is based on well-known smoothing and penalization techniques. On the one hand, we consider a classical bundle method together with an approach for finding good starting points. On the other hand, we integrate the Continuous Stochastic Gradient method, a variant of stochastic gradient descent that is able to exploit regularity in the data. Using the example of gas networks, we compare the two algorithmic concepts for different topologies and two types of mixture ambiguity sets with Gaussian reference distributions and with polyhedral and ϕ-divergence based feasible sets for the mixing coefficients. The results show that both solution approaches are well-suited to this difficult problem class. Based on the numerical results, we provide general advice for choosing the more efficient algorithm depending on the main challenges of the optimization problem at hand. We conclude with an outlook on the applicability of the methods in a wider context.
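In simplified notation (ours), such a mixture ambiguity set with reference distributions $P_1,\dots,P_K$ reads $\mathcal{P} = \{\sum_{k=1}^{K} \lambda_k P_k : \lambda \in \Lambda\}$, where $\Lambda \subseteq \{\lambda \ge 0 : \sum_{k} \lambda_k = 1\}$ is the polyhedral or ϕ-divergence based feasible set for the mixing coefficients mentioned above.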
Mathematical optimization, although often leading to NP-hard models, is now capable of solving even large-scale instances within reasonable time. However, the primary focus is often placed solely on optimality. This implies that while obtained solutions are globally optimal, they are frequently not comprehensible to humans, in particular when obtained by black-box routines. In contrast, explainability is a standard requirement for results in Artificial Intelligence, but it has rarely been considered in optimization so far. There are only a few studies that aim to find solutions that are both of high quality and explainable. In recent work, explainability for optimization was defined in a data-driven manner: a solution is considered explainable if it closely resembles solutions that have been used in the past under similar circumstances. To this end, it is crucial to identify a preferably small subset of features from a presumably large set that can be used to explain a solution. In mathematical optimization, feature selection has received little attention so far. In this work, we formally define the feature selection problem for explainable optimization and prove that its decision version is NP-complete. We introduce mathematical models for optimized feature selection. As their global solution requires significant computation time with modern mixed-integer linear solvers, we employ local heuristics. Our computational study using data that reflect real-world scenarios demonstrates that the problem can be solved practically efficiently for instances of reasonable size.
Stochastic and (distributionally) robust optimization problems often become computationally challenging as the number of scenarios increases. Scenario reduction is therefore a key technique for improving tractability. We introduce a general scenario reduction method for distributionally robust optimization (DRO), which includes stochastic and robust optimization as special cases. Our approach constructs the reduced DRO problem by projecting the original ambiguity set onto a reduced set of scenarios. Under mild conditions, we establish bounds on the relative quality of the reduction. The methodology is applicable to random variables following either discrete or continuous probability distributions, with representative scenarios appropriately selected in both cases. Given the relevance of optimization problems with linear and quadratic objectives, we further refine our approach for these settings. Finally, we demonstrate its effectiveness through numerical experiments on mixed-integer benchmark instances from MIPLIB and portfolio optimization problems. Our results show that the proposed approximation significantly reduces solution time while maintaining high solution quality with only minor errors.
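As a rough, generic illustration of scenario reduction (a simple clustering heuristic, not the projection-based construction of the paper; all names and parameters below are ours), representative scenarios can be selected greedily and the probability mass of the removed scenarios reassigned to their nearest representatives:

import numpy as np

def reduce_scenarios(scenarios, probs, k, seed=0):
    # pick k representative scenarios (greedy farthest-point style) and reweight them;
    # scenarios: (n, d) array, probs: (n,) probabilities summing to one
    rng = np.random.default_rng(seed)
    n = len(scenarios)
    reps = [int(rng.integers(n))]
    for _ in range(k - 1):
        # add the scenario that is currently worst represented (farthest from all reps)
        dists = np.min(
            np.linalg.norm(scenarios[:, None, :] - scenarios[reps][None, :, :], axis=2), axis=1
        )
        reps.append(int(np.argmax(probs * dists)))
    reps = np.array(reps)

    # reassign each removed scenario's probability to its closest representative
    assign = np.argmin(
        np.linalg.norm(scenarios[:, None, :] - scenarios[reps][None, :, :], axis=2), axis=1
    )
    new_probs = np.array([probs[assign == j].sum() for j in range(k)])
    return scenarios[reps], new_probs

scen = np.random.default_rng(1).normal(size=(200, 3))
p = np.full(200, 1 / 200)
red_scen, red_p = reduce_scenarios(scen, p, k=10)
print(red_scen.shape, red_p.sum())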
Typically, probability distributions that generate uncertain parameters cannot be measured exactly in practice. As a remedy, distributional robustness determines optimized decisions that are protected in a robust fashion against all probability distributions in some appropriately chosen ambiguity set. In this work, we consider robust joint chance-constrained optimization problems and focus on discrete probability distributions. Many methods for this kind of problem study convex or even linear constraint functions. In contrast, we introduce a practically efficient scenario-based bundle method without convexity assumptions on the constraint functions. We start by deriving an approximation of the original robust chance-constrained problem using smoothing and penalization techniques that build on our former work on chance-constrained optimization. Our convergence results with respect to the smoothing approximation and well-known results for penalty approximations suggest replacing the original problem with the approximation problem for large smoothing and penalty parameters. Our scenario-based bundle method starts by solving the approximation problem with a bundle method and then uses the bundle solution to decide which scenarios to include in a scenario-expanded formulation. This formulation is a standard nonlinear optimization problem. Our approach is guaranteed to find feasible solutions. Furthermore, in the numerical experiments on real-world gas transport problems with uncertain demands, we mostly find globally optimal solutions. Comparing these results to the classical robust reformulations for ambiguity sets consisting of confidence intervals and Wasserstein balls, we observe that the scenario-based bundle method typically outperforms solving the classical reformulation directly.
Outer approximation for generalized convex mixed-integer nonlinear robust optimization problems
(2024)
We consider mixed-integer nonlinear robust optimization problems with nonconvexities. In detail, the functions can be nonsmooth and generalized convex, i.e., f°-quasiconvex or f°-pseudoconvex. We propose a robust optimization method that requires no particular structure of the adversarial problem but only approximate worst-case evaluations. The method integrates a bundle method for the continuous subproblems into an outer approximation approach. We prove that our algorithm converges and finds an approximately robust optimal solution, and we propose robust gas transport as a suitable application.
A Gradient-Based Method for Joint Chance-Constrained Optimization with Continuous Distributions
(2024)
The input parameters of an optimization problem are often affected by uncertainties. Chance constraints are a common way to model stochastic uncertainties in the constraints. Typically, algorithms for solving chance-constrained problems require convex functions or discrete probability distributions. In this work, we go one step further and allow non-convexities as well as continuous distributions. We propose a gradient-based approach to approximately solve joint chance-constrained models. We approximate the original problem by smoothing indicator functions. Then, the smoothed chance constraints are relaxed by penalizing their violation in the objective function. The approximation problem is solved with the Continuous Stochastic Gradient method, an enhanced version of stochastic gradient descent that has recently been introduced in the literature. We present a convergence theory for the smoothing and penalty approximations. Under very mild assumptions, our approach is applicable to a wide range of chance-constrained optimization problems. As an example, we illustrate its computational efficiency on difficult practical problems arising in the operation of gas networks. The numerical experiments demonstrate that the approach quickly finds nearly feasible solutions for joint chance-constrained problems with non-convex constraint functions and continuous distributions, even for realistically sized instances.
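A minimal sketch of the smoothing and penalty idea on a toy problem (our simplification: a sigmoid stands in as one possible smooth surrogate of the indicator function, and plain mini-batch stochastic gradient steps replace the Continuous Stochastic Gradient method):

import numpy as np

rng = np.random.default_rng(0)
alpha, tau, rho = 0.1, 0.05, 100.0    # risk level, smoothing width, penalty weight
lr, x = 0.02, 0.0                     # step size and (scalar) decision variable

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# toy problem: minimize x  subject to  P(xi <= x) >= 1 - alpha  with xi ~ N(0, 1);
# the optimum is the (1 - alpha)-quantile (about 1.28); the penalty approximation
# gets close to it for large rho and small tau
for _ in range(5000):
    xi = rng.normal(size=64)                   # fresh mini-batch of samples
    s = sigmoid((xi - x) / tau)                # smoothed indicators of constraint violation
    prob = s.mean()                            # smoothed violation probability estimate
    dprob_dx = (-s * (1.0 - s) / tau).mean()   # its derivative with respect to x
    viol = max(prob - alpha, 0.0)
    grad = 1.0 + 2.0 * rho * viol * dprob_dx   # gradient of  x + rho * viol**2
    x -= lr * grad

print(f"approximate solution x ~ {x:.2f}")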
Stochastic Optimization (SO) is a classical approach for optimization under uncertainty that typically requires knowledge about the probability distribution of uncertain parameters. As the latter is often unknown, Distributionally Robust Optimization (DRO) provides a strong alternative that determines the best guaranteed solution over a set of distributions (ambiguity set). In this work, we present an approach for DRO over time that uses online learning and scenario observations arriving as a data stream to learn more about the uncertainty. Our robust solutions adapt over time and reduce the cost of protection with shrinking ambiguity. For various kinds of ambiguity sets, the robust solutions converge to the SO solution. Our algorithm achieves the optimization and learning goals without solving the DRO problem exactly at any step. We also provide a regret bound for the quality of the online strategy which converges at a rate of $ O(\log T / \sqrt{T})$, where $T$ is the number of iterations. Furthermore, we illustrate the effectiveness of our procedure by numerical experiments on mixed-integer optimization instances from popular benchmark libraries and give practical examples stemming from telecommunications and routing. Our algorithm is able to solve the DRO over time problem significantly faster than standard reformulations.
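The shrinking-ambiguity idea can be illustrated with a small, self-contained sketch (ours, not the paper's algorithm, which avoids solving the DRO problem exactly in each step): maintain an empirical distribution over the observed scenarios and let the ambiguity radius decrease with the number of observations, so the robust decision gradually approaches the stochastic optimization solution.

import numpy as np

rng = np.random.default_rng(0)

# toy newsvendor-style problem: choose x, then pay shortage costs for unmet demand d
scen_vals = np.array([0.5, 1.0, 1.5, 2.5])      # possible demand levels
true_p = np.array([0.1, 0.4, 0.4, 0.1])         # unknown to the decision maker

def cost(x, d):
    return 1.0 * x + 4.0 * np.maximum(d - x, 0.0)

xs = np.linspace(0.0, 3.0, 61)                  # candidate decisions
counts = np.zeros(len(scen_vals))

def worst_case_cost(x, p_hat, eps):
    # exact worst case over the L1 ball {q : ||q - p_hat||_1 <= eps} on this finite support:
    # shift up to eps/2 probability mass from the cheapest scenarios onto the most expensive one
    c = cost(x, scen_vals)
    q = p_hat.copy()
    worst = int(np.argmax(c))
    budget = eps / 2.0
    for s in np.argsort(c):
        if s == worst or budget <= 0:
            continue
        move = min(q[s], budget)
        q[s] -= move
        q[worst] += move
        budget -= move
    return float(q @ c)

for t in range(1, 201):
    s = rng.choice(len(scen_vals), p=true_p)    # scenario observation from the data stream
    counts[s] += 1
    p_hat = counts / counts.sum()
    eps = 1.0 / np.sqrt(t)                      # shrinking ambiguity radius
    x_t = xs[np.argmin([worst_case_cost(x, p_hat, eps) for x in xs])]

print(f"decision after {t} observations: x = {x_t:.2f} (ambiguity radius {eps:.3f})")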
In many real-world mixed-integer optimization problems from engineering, the side constraints can be subdivided into two categories: constraints which describe a certain logic to model a feasible allocation of resources (such as a maximal number of available assets, working time requirements, maintenance requirements, contractual obligations, etc.), and constraints which model physical processes and the related quantities (such as current, pressure, temperature, etc.). While the first type of constraints can often easily be stated in terms of a mixed-integer program (MIP), the second part may involve the incorporation of complex non-linearities, partial differential equations or even a black-box simulation of the involved physical process. In this work, we propose the integration of a trained tree-based classifier (a decision tree or a random forest) into a mixed-integer optimization model as a possible remedy. We assume that the classifier has been trained on data points produced by a detailed simulation of a given complex process to represent the functional relationship between the involved physical quantities. We then derive MIP-representable reformulations of the trained classifier such that the resulting model can be solved using state-of-the-art solvers. Using several use cases with different optimization goals, we show the broad applicability of our framework, which is easily extendable to other tasks beyond engineering. In a detailed real-world computational study for the design of stable direct-current power networks, we demonstrate that our approach yields high-quality solutions in reasonable computation times.
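The reformulation idea can be sketched with a minimal example (ours: a hand-written depth-2 tree, a big-M encoding, and the PuLP modeling library; the paper derives such MIP representations for actually trained decision trees and random forests):

from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, value

# hand-written stand-in for a trained depth-2 decision tree on two physical
# quantities (x1, x2); leaves A and D are classified as "stable":
#   root:  x1 <= 4 ?  ->  left:  x2 <= 3 ? leaf A (stable) : leaf B (unstable)
#                         right: x2 <= 6 ? leaf C (unstable) : leaf D (stable)
M, EPS = 100.0, 0.01                  # big-M constant and tolerance for strict inequalities

prob = LpProblem("tree_embedded_design", LpMinimize)
x1 = LpVariable("x1", lowBound=0, upBound=10)
x2 = LpVariable("x2", lowBound=0, upBound=10)
zA = LpVariable("zA", cat=LpBinary)   # operating point lies in leaf A
zD = LpVariable("zD", cat=LpBinary)   # operating point lies in leaf D

prob += 2 * x1 + x2                   # some design cost to be minimized
prob += x1 + x2 >= 8                  # a resource/logic constraint of the MIP part
prob += zA + zD == 1                  # the point must lie in a "stable" leaf

# big-M encoding of the root-to-leaf split conditions
prob += x1 <= 4 + M * (1 - zA)        # leaf A: x1 <= 4
prob += x2 <= 3 + M * (1 - zA)        # leaf A: x2 <= 3
prob += x1 >= 4 + EPS - M * (1 - zD)  # leaf D: x1 > 4
prob += x2 >= 6 + EPS - M * (1 - zD)  # leaf D: x2 > 6

prob.solve()
print("x1 =", value(x1), " x2 =", value(x2), " stable leaf D chosen:", value(zD) == 1)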