TY - JOUR
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - Second-order Conditional Gradient Sliding
N2 - Constrained second-order convex optimization algorithms are the method of choice when a high-accuracy solution to a problem is needed, due to their local quadratic convergence. These algorithms require the solution of a constrained quadratic subproblem at every iteration. We present the \emph{Second-Order Conditional Gradient Sliding} (SOCGS) algorithm, which uses a projection-free algorithm to solve the constrained quadratic subproblems inexactly. When the feasible region is a polytope, the algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations. Once in the quadratic regime, the SOCGS algorithm requires O(log(log(1/ε))) first-order and Hessian oracle calls and O(log(1/ε) log(log(1/ε))) linear minimization oracle calls to achieve an ε-optimal solution. This algorithm is useful when the feasible region can only be accessed efficiently through a linear optimization oracle, and computing first-order information of the function, although possible, is costly.
Y1 - 2020
ER -

TY - JOUR
A1 - Besançon, Mathieu
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - FrankWolfe.jl: a high-performance and flexible toolbox for Frank-Wolfe algorithms and Conditional Gradients
JF - INFORMS Journal on Computing
N2 - We present FrankWolfe.jl, an open-source implementation of several popular Frank–Wolfe and conditional gradients variants for first-order constrained optimization. The package is designed with flexibility and high performance in mind, allowing for easy extension and relying on few assumptions regarding the user-provided functions. It supports Julia’s unique multiple dispatch feature, and it interfaces smoothly with generic linear optimization formulations using MathOptInterface.jl.
Y1 - 2022
U6 - https://doi.org/10.1287/ijoc.2022.1191
VL - 34
IS - 5
SP - 2383
EP - 2865
ER -

TY - CHAP
A1 - Criado, Francisco
A1 - Martínez-Rubio, David
A1 - Pokutta, Sebastian
T1 - Fast Algorithms for Packing Proportional Fairness and its Dual
T2 - Proceedings of the Conference on Neural Information Processing Systems
N2 - The proportional fair resource allocation problem is a major problem studied in flow control of networks, operations research, and economic theory, where it has found numerous applications. This problem, defined as the constrained maximization of sum_i log x_i, is known as the packing proportional fairness problem when the feasible set is defined by positive linear constraints and x ∈ R^n_{≥0}. In this work, we present a distributed accelerated first-order method for this problem which improves upon previous approaches. We also design an algorithm for the optimization of its dual problem. Both algorithms are width-independent.
Y1 - 2022
VL - 36
ER -

TY - CHAP
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
A1 - Besançon, Mathieu
T1 - Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions
T2 - Thirty-fifth Conference on Neural Information Processing Systems, NeurIPS 2021
N2 - Generalized self-concordance is a key property present in the objective function of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step size strategy γ_t = 2/(t + 2), obtaining an O(1/t) convergence rate for this class of functions in terms of primal gap and Frank-Wolfe gap, where t is the iteration count. This avoids the use of second-order information or the need to estimate the local smoothness parameters required by previous work. We also show improved convergence rates for various common cases, e.g., when the feasible region under consideration is uniformly convex or polyhedral.
Y1 - 2021
ER -

TY - JOUR
A1 - Kerdreux, Thomas
A1 - d'Aspremont, Alexandre
A1 - Pokutta, Sebastian
T1 - Local and Global Uniform Convexity Conditions
N2 - We review various characterizations of uniform convexity and smoothness on norm balls in finite-dimensional spaces and connect results stemming from the geometry of Banach spaces with scaling inequalities used in analysing the convergence of optimization methods. In particular, we establish local versions of these conditions to provide sharper insights on a recent body of complexity results in learning theory, online learning, or offline optimization, which rely on the strong convexity of the feasible set. While they have a significant impact on complexity, these strong convexity or uniform convexity properties of feasible sets are not exploited as thoroughly as their functional counterparts, and this work is an effort to correct this imbalance. We conclude with some practical examples in optimization and machine learning where leveraging these conditions and localized assumptions leads to new complexity results.
Y1 - 2021
ER -

TY - CHAP
A1 - Carderera, Alejandro
A1 - Diakonikolas, Jelena
A1 - Lin, Cheuk Yin
A1 - Pokutta, Sebastian
T1 - Parameter-free Locally Accelerated Conditional Gradients
T2 - ICML 2021
N2 - Projection-free conditional gradient (CG) methods are the algorithms of choice for constrained optimization setups in which projections are often computationally prohibitive but linear optimization over the constraint set remains computationally feasible. Unlike in projection-based methods, globally accelerated convergence rates are in general unattainable for CG. However, a very recent work on Locally accelerated CG (LaCG) has demonstrated that local acceleration for CG is possible for many settings of interest. The main downside of LaCG is that it requires knowledge of the smoothness and strong convexity parameters of the objective function. We remove this limitation by introducing a novel, Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees. Our theoretical results are complemented by numerical experiments, which demonstrate local acceleration and showcase the practical improvements of PF-LaCG over non-accelerated algorithms, both in terms of iteration count and wall-clock time.
Y1 - 2021
ER -

TY - JOUR
A1 - Roux, Christophe
A1 - Pokutta, Sebastian
A1 - Wirth, Elias
A1 - Kerdreux, Thomas
T1 - Efficient Online-Bandit Strategies for Minimax Learning Problems
N2 - Several learning problems involve solving min-max problems, e.g., empirical distributional robust learning [Namkoong and Duchi, 2016; Curi et al., 2020] or learning with non-standard aggregated losses [Shalev-Shwartz and Wexler, 2016; Fan et al., 2017]. More specifically, these problems are convex-linear problems where the minimization is carried out over the model parameters w ∈ W and the maximization over the empirical distribution p ∈ K of the training set indexes, where K is the simplex or a subset of it. To design efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm. We argue that the efficiency of such approaches critically depends on the structure of K and propose two properties of K that facilitate designing efficient algorithms. We focus on a specific family of sets S_{n,k} encompassing various learning applications and provide high-probability convergence guarantees to the minimax values.
Y1 - 2021
ER -

TY - JOUR
A1 - Kerdreux, Thomas
A1 - Roux, Christophe
A1 - d'Aspremont, Alexandre
A1 - Pokutta, Sebastian
T1 - Linear Bandits on Uniformly Convex Sets
JF - Journal of Machine Learning Research
N2 - Linear bandit algorithms yield Õ(n√T) pseudo-regret bounds on compact convex action sets K ⊂ R^n, and two types of structural assumptions lead to better pseudo-regret bounds. When K is the simplex or an ℓ_p ball with p ∈ ]1,2], there exist bandit algorithms with Õ(√n√T) pseudo-regret bounds. Here, we derive bandit algorithms for some strongly convex sets beyond ℓ_p balls that enjoy pseudo-regret bounds of Õ(√n√T), which answers an open question from [BCB12, §5.5.]. Interestingly, when the action set is uniformly convex but not necessarily strongly convex, we obtain pseudo-regret bounds with a dimension dependency smaller than O(√n). However, this comes at the expense of asymptotic rates in T varying between O(√T) and O(T).
Y1 - 2021
UR - https://www.jmlr.org/papers/v22/21-0277.html
VL - 22
IS - 284
SP - 1
EP - 23
ER -

TY - CHAP
A1 - Wirth, Elias
A1 - Pokutta, Sebastian
T1 - Conditional Gradients for the Approximately Vanishing Ideal
T2 - Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
N2 - The vanishing ideal of a set of points X is the set of polynomials that evaluate to 0 over all points x in X and admits an efficient representation by a finite set of polynomials called generators. To accommodate the noise in the data set, we introduce the Conditional Gradients Approximately Vanishing Ideal algorithm (CGAVI) for the construction of the set of generators of the approximately vanishing ideal. The constructed set of generators captures polynomial structures in data and gives rise to a feature map that can, for example, be used in combination with a linear classifier for supervised learning. In CGAVI, we construct the set of generators by solving specific instances of (constrained) convex optimization problems with the Pairwise Frank-Wolfe algorithm (PFW). Among other things, the constructed generators inherit the LASSO generalization bound and vanish not only on the training data but also on out-of-sample data. Moreover, CGAVI admits a compact representation of the approximately vanishing ideal by constructing few generators with sparse coefficient vectors.
Y1 - 2022
UR - https://proceedings.mlr.press/v151/wirth22a.html
VL - 151
SP - 2191
EP - 2209
ER -

TY - CHAP
A1 - Martínez-Rubio, David
A1 - Pokutta, Sebastian
T1 - Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties
T2 - Proceedings of Thirty Sixth Conference on Learning Theory
N2 - We propose a globally-accelerated, first-order method for the optimization of smooth and (strongly or not) geodesically-convex functions in a wide class of Hadamard manifolds. We achieve the same convergence rates as Nesterov’s accelerated gradient descent, up to a multiplicative geometric penalty and log factors. Crucially, we can enforce our method to stay within a compact set we define. Prior fully accelerated works \emph{resort to assuming} that the iterates of their algorithms stay in some pre-specified compact set, except for two previous methods of limited applicability. For our manifolds, this solves the open question in (Kim and Yang, 2022) about obtaining global general acceleration without assuming that the iterates stay in the feasible set. In our solution, we design an accelerated Riemannian inexact proximal point algorithm, which is a result that was unknown even with exact access to the proximal operator, and is of independent interest. For smooth functions, we show that we can implement the prox step inexactly with first-order methods in Riemannian balls whose diameter is sufficient for global accelerated optimization.
Y1 - 2023
UR - https://proceedings.mlr.press/v195/martinez-rubio23a.html
VL - 195
SP - 359
EP - 393
ER -