The 10 most recently published documents
We present a new algorithm, Directional Optimization Search with Surrogate (DOSS), for optimizing problems with box constraints and a computationally expensive black-box objective function. DOSS is a radial basis function (RBF) based method aimed at higher-dimensional, computationally expensive objective functions that may be multimodal. DOSS introduces three techniques not used in earlier RBF algorithms: knowledge levels for the coordinates, fewer initial sampling points, and use of the surrogate’s gradient information. Numerical results on a test set of 14 problems with 36, 48, and 60 dimensions show that DOSS outperforms two recently published algorithms, RBFOpt and TuRBO, as well as earlier RBF algorithms such as DYCORS. TuRBO is a Gaussian-process-based optimization algorithm that outperformed earlier state-of-the-art methods. DOSS also performs well on real-world optimization problems, including robot pushing and rover trajectory planning. Almost sure convergence of DOSS is also proven in this paper. An implementation of DOSS is available at https://doi.org/10.5281/zenodo.13731558.
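The surrogate-gradient ingredient mentioned above can be illustrated with a generic cubic RBF model. The sketch below is not DOSS itself and uses illustrative function names; it fits an interpolating cubic RBF surrogate with a linear polynomial tail and evaluates its gradient analytically:

```python
import numpy as np

def fit_cubic_rbf(X, f):
    """Fit s(x) = sum_i lam[i] * ||x - X[i]||^3 + c @ x + c0 to data (X, f).

    X: (n, d) sample points, f: (n,) function values.  Returns (lam, c, c0).
    """
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    Phi = r ** 3
    P = np.hstack([X, np.ones((n, 1))])                 # linear polynomial tail
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    sol = np.linalg.solve(A, np.concatenate([f, np.zeros(d + 1)]))
    return sol[:n], sol[n:n + d], sol[n + d]

def rbf_gradient(x, X, lam, c):
    """Gradient of the surrogate at x: grad ||x - x_i||^3 = 3 r_i (x - x_i)."""
    diff = x - X                                        # (n, d)
    r = np.linalg.norm(diff, axis=1)                    # (n,)
    return (3.0 * lam * r) @ diff + c

X = np.random.default_rng(1).uniform(-1, 1, (30, 4))
lam, c, c0 = fit_cubic_rbf(X, np.sum(X ** 2, axis=1))
g = rbf_gradient(np.zeros(4), X, lam, c)                # surrogate gradient at 0
```

Because the surrogate is cheap to differentiate, its gradient can help guide candidate points without any additional expensive objective evaluations.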
Conjugate gradient minimization methods (CGM) and their accelerated variants are widely used. We focus on the use of cubic regularization to improve the CGM direction independently of the step-length computation. In this paper, we propose the Hybrid Cubic Regularization of CGM, in which regularized steps are used selectively. Using Shanno’s reformulation of CGM as a memoryless BFGS method, we derive new formulas for the regularized step direction. We show that the regularized step direction incurs the same order of computational cost per iteration as its non-regularized version. Moreover, the Hybrid Cubic Regularization of CGM exhibits global convergence under fewer assumptions. In numerical experiments, the new step directions are shown to require fewer iterations, improve runtime, and reduce the need to reset the step direction. Overall, the Hybrid Cubic Regularization of CGM retains the same memoryless and matrix-free properties while outperforming CGM as a memoryless BFGS method in iterations and runtime.
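For context, Shanno’s reformulation expresses the CGM direction as a memoryless BFGS step, i.e., a BFGS update of the identity using only the latest pair. A standard statement of that (non-regularized) direction, which the hybrid cubic regularization then modifies, is

\[
d_{k+1} \;=\; -\,g_{k+1}
\;+\; \frac{s_k^\top g_{k+1}}{s_k^\top y_k}\, y_k
\;+\; \left[ \frac{y_k^\top g_{k+1}}{s_k^\top y_k}
\;-\; \Bigl(1 + \frac{y_k^\top y_k}{s_k^\top y_k}\Bigr)
\frac{s_k^\top g_{k+1}}{s_k^\top y_k} \right] s_k,
\qquad s_k = x_{k+1} - x_k,\quad y_k = g_{k+1} - g_k,
\]

which requires only inner products, explaining the memoryless, matrix-free character noted above.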
Hougardy and Schroeder (WG 2014) proposed a combinatorial technique for pruning the search space in the traveling salesman problem, establishing that, for a given instance, certain edges cannot be present in any optimal tour. We describe an implementation of their technique, employing an exact TSP solver to locate k-opt moves in the elimination process. In our computational study, we combine LP reduced-cost elimination with the new combinatorial algorithm. We report results on a set of geometric instances, with the number of points n ranging from 3038 up to 115,475. The test set includes all TSPLIB instances having at least 3000 points, together with 250 randomly generated instances, each with 10,000 points, and three currently unsolved instances having 100,000 or more points. In all but two of the test instances, the complete-graph edge sets were reduced to under 3n edges. For the three large unsolved instances, repeated runs of the elimination process reduced the graphs to under 2.5n edges.
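The LP reduced-cost elimination combined with the combinatorial algorithm follows the standard fixing argument: an edge held at its lower bound in the LP relaxation can be discarded once the LP bound plus that edge’s reduced cost exceeds the length of the best known tour. A minimal sketch with illustrative names, not the authors’ code:

```python
def reduced_cost_eliminate(edges, red_cost, lp_bound, best_tour_len):
    """Standard LP reduced-cost fixing for the TSP: an edge at its lower
    bound in the LP relaxation cannot appear in any optimal tour when
    lp_bound + red_cost[e] exceeds the best known tour length."""
    return [e for e in edges if lp_bound + red_cost[e] <= best_tour_len]
```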
Time transformation is a ubiquitous tool in the theoretical sciences, especially in physics. It can also be used to transform switched optimal control problems into control problems with a fixed switching order and purely continuous decisions. This approach is variously known as the enhanced time transformation, time-scaling, or switching time optimization (STO) approach for mixed-integer optimal control. The approach is well understood and widely used due to its many favorable properties. Recently, several extensions and algorithmic improvements have been proposed. We use an alternative formulation, partial outer convexification (POC), to study the convergence properties of STO. We introduce the open-source software package ampl_mintoc (Sager et al., czeile/ampl_mintoc: Math programming c release, 2024, https://doi.org/10.5281/zenodo.12520490). It is based on AMPL, is designed for the formulation of mixed-integer optimal control problems, and allows almost identical implementations to be used for STO and POC. We discuss and explain our main numerical result: STO is likely to result in more local minima on each discretization grid than POC, but the number of local minima is asymptotically identical for both approaches.
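The STO transformation referred to here is standard: for a fixed mode sequence \(\sigma_1, \dots, \sigma_N\), the integer decisions are replaced by nonnegative stage durations \(\tau_i\), and the dynamics are rescaled onto a fixed grid in the transformed time \(s\):

\[
\dot{x}(s) \;=\; \tau_i\, f_{\sigma_i}\!\bigl(x(s)\bigr), \quad s \in [\,i-1,\; i\,], \qquad
\tau_i \ge 0, \quad \sum_{i=1}^{N} \tau_i = T,
\]

so that the switching times become purely continuous optimization variables; a duration \(\tau_i = 0\) effectively removes stage \(i\) from the sequence.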
The late Professor M. J. D. Powell devised five trust-region methods for derivative-free optimization, namely COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA. He carefully implemented them in publicly available solvers renowned for their robustness and efficiency. However, the solvers were implemented in Fortran 77 and hence may not be easily accessible to some users. We introduce the PDFO package, which provides user-friendly Python and MATLAB interfaces to Powell’s code. With PDFO, users of these languages can call Powell’s Fortran solvers easily without dealing with the Fortran code. Moreover, PDFO includes bug fixes and improvements, which are particularly important for handling problems that suffer from ill-conditioning or failures of function evaluations. In addition to the PDFO package, we provide an overview of Powell’s methods, sketching them from a uniform perspective, summarizing their main features, and highlighting the similarities and interconnections among them. We also present experiments on PDFO to demonstrate its stability under noise, its tolerance of failures in function evaluations, and its potential to solve certain hyperparameter optimization problems.
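As a quick illustration of the Python interface (the call below follows the pattern documented for the pdfo package; the test function and starting point are our own choices):

```python
import numpy as np
from pdfo import pdfo

def chained_rosenbrock(x):
    # Powell's chained Rosenbrock test function.
    return float(np.sum(4.0 * (x[:-1] ** 2 - x[1:]) ** 2 + (x[:-1] - 1.0) ** 2))

x0 = np.zeros(6)
res = pdfo(chained_rosenbrock, x0)  # selects a suitable Powell solver automatically
print(res.x, res.fun)
```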
We study a primal-dual interior point method specialized to clustered low-rank semidefinite programs requiring high-precision numerics, which arise from certain multivariate polynomial (matrix) programs through sums-of-squares characterizations and sampling. We consider the interplay of sampling and symmetry reduction, as well as a greedy method for obtaining numerically good bases and sample points. We apply this to the computation of three-point bounds for the kissing number problem, for which we show a significant speedup. This allows for the computation of improved kissing number bounds in dimensions 11 through 23. The approach performs well on problems with poor numerical conditioning, which we demonstrate through new computations for the binary sphere packing problem.
In this paper, we introduce a new, effective matrix adaptation evolution strategy (MADFO) for noisy derivative-free optimization problems. Like every MAES solver, MADFO consists of three phases: mutation, selection, and recombination. MADFO improves the mutation phase by generating good step sizes, neither too small nor too large, which increase the probability of selecting mutation points with small inexact function values in the selection phase. In the recombination phase, a new randomized non-monotone line search method may find a recombination point whose inexact function value is the lowest among all points evaluated so far, and accept it as the best point. If no such point is found, a heuristic point may be accepted as the best point. We compare MADFO with state-of-the-art DFO solvers on noisy test problems obtained by adding various kinds and levels of noise to all unconstrained CUTEst test problems with dimensions n ≤ 20, and find that MADFO solves the highest number of problems.
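For readers unfamiliar with the MAES template, the following is a deliberately simplified mutate/select/recombine skeleton in the style of matrix adaptation evolution strategies. It is not MADFO: the adaptation constants are placeholders, the evolution path is omitted, and function values are assumed exact.

```python
import numpy as np

def maes_skeleton(f, x0, sigma=1.0, lam=12, iters=200, seed=0):
    """Simplified (mu/mu_w, lambda) matrix adaptation ES -- not MADFO."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                   # recombination weights
    c_mu, c_s = min(1.0, mu / n ** 2), 0.3         # placeholder learning rates
    M, x = np.eye(n), np.asarray(x0, dtype=float)
    for _ in range(iters):
        Z = rng.standard_normal((lam, n))          # mutation: z ~ N(0, I)
        D = Z @ M.T                                # d = M z
        X = x + sigma * D
        order = np.argsort([f(xi) for xi in X])[:mu]   # selection: best mu
        z_w, d_w = w @ Z[order], w @ D[order]      # recombination
        x = x + sigma * d_w
        C_z = (w[:, None] * Z[order]).T @ Z[order]  # weighted sum of z z^T
        M = M @ (np.eye(n) + 0.5 * c_mu * (C_z - np.eye(n)))
        # path-free step-size rule: under neutral selection,
        # E ||z_w||^2 = n * sum(w**2)
        sigma *= np.exp(0.5 * c_s * (z_w @ z_w / (n * (w @ w)) - 1.0))
    return x
```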
Model training algorithms that observe a small portion of the training set in each computational step are ubiquitous in practical machine learning and include both stochastic and online optimization methods. In the vast majority of cases, such algorithms observe the training samples via the gradients of the cost functions the samples incur. Thus, what these methods exploit is the slope of the cost functions, via their first-order approximations. To address limitations of gradient-based methods, such as sensitivity to step-size choice in the stochastic setting, or inability to exploit small function variability in the online setting, several streams of research attempt to exploit more information about the cost functions than just their gradients, via the well-known proximal operators. However, implementing such methods in practice poses a challenge, since each iteration step boils down to computing the proximal operator, which may not be as easy as computing a gradient. In this work we devise a novel algorithmic framework that exploits convex duality theory to achieve both algorithmic efficiency and software modularity of proximal operator implementations, in order to make experimentation with incremental proximal optimization algorithms accessible to a larger audience of researchers and practitioners by reducing the gap between their theoretical description in research papers and their use in practice. We provide a reference Python implementation of the framework developed in this paper as an open-source library on GitHub (https://github.com/alexshtf/inc_prox_pt/releases/tag/prox_pt_paper; Shtoff, Efficient implementation of incremental proximal point methods, arXiv:2205.01457, 2024), along with examples that demonstrate our implementation on a variety of problems and reproduce the numerical experiments in this paper. The pure Python reference implementation is not necessarily the most efficient, but it is a basis for creating efficient implementations by combining Python with a native backend.
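To make the idea concrete, here is a self-contained incremental proximal point loop on least squares, using the closed-form proximal operator of a single squared linear loss. This illustrates the setting only; it does not use the library’s API:

```python
import numpy as np

def prox_squared_linear(x, a, b, eta):
    """prox_{eta * f}(x) for f(y) = 0.5 * (a @ y - b)**2, in closed form:
    the optimality condition y = x - eta * a * (a @ y - b) is linear in y."""
    t = (a @ x - b) / (1.0 + eta * (a @ a))
    return x - eta * t * a

# Incremental proximal point on least squares: one exact prox step per sample.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 5)), rng.standard_normal(200)
x, eta = np.zeros(5), 0.5
for epoch in range(30):
    for i in rng.permutation(len(b)):
        x = prox_squared_linear(x, A[i], b[i], eta)
print(x)
```

Each step solves its one-sample subproblem exactly, which is what makes such methods less sensitive to the step-size choice than plain stochastic gradient steps.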
PEPit is a Python package that aims to simplify access to worst-case analyses of a large family of first-order optimization methods, possibly involving gradient, projection, proximal, or linear optimization oracles, along with their approximate or Bregman variants. In short, PEPit is a package enabling computer-assisted worst-case analyses of first-order optimization methods. The key underlying idea is to cast the problem of performing a worst-case analysis, often referred to as a performance estimation problem (PEP), as a semidefinite program (SDP) that can be solved numerically. To do this, users are only required to write first-order methods nearly as they would have implemented them. The package then takes care of the SDP modeling, and the worst-case analysis is performed numerically via standard solvers.
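A minimal usage sketch, following the pattern in the PEPit documentation (computing the worst case of one gradient step on an L-smooth convex function; an SDP solver must be installed for problem.solve()):

```python
from PEPit import PEP
from PEPit.functions import SmoothConvexFunction

L = 1.0
problem = PEP()
f = problem.declare_function(SmoothConvexFunction, L=L)
xs = f.stationary_point()                  # an optimal point x*
x0 = problem.set_initial_point()
problem.set_initial_condition((x0 - xs) ** 2 <= 1)
x1 = x0 - (1.0 / L) * f.gradient(x0)       # the method, written as implemented
problem.set_performance_metric(f(x1) - f(xs))
tau = problem.solve()                      # worst-case value of f(x1) - f(x*)
```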