Single and joint chance-constrained optimization with continuous distributions

The input parameters of an optimization problem are often affected by uncertainties. Chance constraints are a common way to model stochastic uncertainties in the constraints. Typically, algorithms for solving chance-constrained problems require convex functions or discrete probability distributions. In this work, we go one step further and allow non-convexities as well as continuous distributions. We propose a gradient-based approach to approximately solve joint chance-constrained models. We approximate the original problem by smoothing the indicator functions. The smoothed chance constraints are then relaxed by penalizing their violation in the objective function. The resulting approximation problem is solved with the Continuous Stochastic Gradient method, which was recently introduced in the literature as an enhanced version of stochastic gradient descent. We present a convergence theory for the smoothing and penalty approximations. Under very mild assumptions, our approach is applicable to a wide range of chance-constrained optimization problems. As an example, we illustrate its computational efficiency on difficult practical problems arising in the operation of gas networks. The numerical experiments demonstrate that the approach quickly finds nearly feasible solutions for joint chance-constrained problems with non-convex constraint functions and continuous distributions, even for realistically sized instances.
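To make the pipeline sketched in the abstract concrete (smooth the indicator function, penalize constraint violation in the objective, run a stochastic gradient method), the following is a minimal Python sketch under several assumptions: the objective f, the constraint function g, the sigmoid smoothing, the quadratic penalty, and all parameter values are hypothetical illustrations, and plain mini-batch stochastic gradient descent stands in for the Continuous Stochastic Gradient method used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.05   # allowed violation probability: require P(g(x, xi) > 0) <= alpha
rho = 100.0    # penalty weight on the smoothed chance constraint (assumed)
tau = 0.05     # smoothing width of the sigmoid indicator approximation (assumed)
lr = 1e-2      # step size
batch = 64     # samples of xi drawn per iteration

def f(x):
    # hypothetical objective: a simple quadratic to minimize
    return 0.5 * np.dot(x, x)

def grad_f(x):
    return x

def g(x, xi):
    # hypothetical non-convex constraint function, constraint is g(x, xi) <= 0
    return np.sin(x[0]) + x[1] ** 2 + xi - 1.0

def grad_g_x(x, xi):
    # gradient of g with respect to x
    return np.array([np.cos(x[0]), 2.0 * x[1]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 1.0])
for it in range(5000):
    xi = rng.normal(0.0, 0.3, size=batch)  # continuous distribution of xi
    gv = np.array([g(x, s) for s in xi])
    # smooth the indicator 1[g > 0] by sigmoid(g / tau);
    # its sample mean estimates the violation probability P(g(x, xi) > 0)
    ind = sigmoid(gv / tau)
    prob_est = ind.mean()
    # quadratic penalty rho * max(prob_est - alpha, 0)^2 on the relaxed constraint
    viol = max(prob_est - alpha, 0.0)
    # chain rule: d/dx sigmoid(g/tau) = sigmoid'(g/tau) / tau * grad_g_x
    dind = (ind * (1.0 - ind) / tau)[:, None] * np.array([grad_g_x(x, s) for s in xi])
    grad_pen = 2.0 * rho * viol * dind.mean(axis=0) if viol > 0 else 0.0
    x -= lr * (grad_f(x) + grad_pen)

print("x =", x, "estimated violation probability =", prob_est)
```

In the paper's approach the smoothing and penalty parameters are driven by a convergence theory and the gradient samples are aggregated as in the Continuous Stochastic Gradient method; the sketch above only illustrates the smoothing-plus-penalty structure of the approximation.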

Metadata
Author: Daniela Bernhard, Frauke Liers, Michael Stingl
Document Type: Preprint
Language: English
Date of Publication (online): 2024/02/13
Release Date: 2024/02/13
Subprojects: B06
Licence: Creative Commons CC BY 4.0 International (Attribution)