TY - CHAP
A1 - Diakonikolas, Jelena
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - Breaking the Curse of Dimensionality (Locally) to Accelerate Conditional Gradients
T2 - OPTML Workshop Paper
Y1 - 2019
N1 - URL of the Code: https://colab.research.google.com/drive/1ejjfCan7xnEhWWJXCIzb03CwQRG9iW_O
N1 - URL of the PDF: https://opt-ml.org/papers/2019/paper_26.pdf
N1 - URL of the Poster: https://app.box.com/s/d7p038u7df422q4jsccbmj15mngqv2ws
N1 - URL of the Slides: https://app.box.com/s/gphkhapso7d1vrfnzqykkb3vx0agxh8w
N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2019/07/04/LaCG-abstract.html
ER -

TY - CHAP
A1 - Diakonikolas, Jelena
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - Locally Accelerated Conditional Gradients
T2 - Proceedings of AISTATS
Y1 - 2020
N1 - URL of the Code: https://colab.research.google.com/drive/1ejjfCan7xnEhWWJXCIzb03CwQRG9iW_O
N1 - URL of the PDF: http://proceedings.mlr.press/v108/diakonikolas20a/diakonikolas20a.pdf
N1 - URL of the Slides: https://app.box.com/s/gphkhapso7d1vrfnzqykkb3vx0agxh8w
N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2019/07/04/LaCG-abstract.html
N1 - URL of the Video: https://slideslive.com/38930107/locally-accelerated-conditional-gradients?ref=account-folder-52123-folders
ER -

TY - CHAP
A1 - Carderera, Alejandro
A1 - Diakonikolas, Jelena
A1 - Lin, Cheuk Yin
A1 - Pokutta, Sebastian
T1 - Parameter-free Locally Accelerated Conditional Gradients
T2 - Proceedings of ICML
N2 - Projection-free conditional gradient (CG) methods are the algorithms of choice for constrained optimization setups in which projections are often computationally prohibitive but linear optimization over the constraint set remains computationally feasible. Unlike in projection-based methods, globally accelerated convergence rates are in general unattainable for CG. However, a very recent work on Locally accelerated CG (LaCG) has demonstrated that local acceleration for CG is possible for many settings of interest. The main downside of LaCG is that it requires knowledge of the smoothness and strong convexity parameters of the objective function. We remove this limitation by introducing a novel, Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees. Our theoretical results are complemented by numerical experiments, which demonstrate local acceleration and showcase the practical improvements of PF-LaCG over non-accelerated algorithms, both in terms of iteration count and wall-clock time.
Y1 - 2021
ER -

TY - JOUR
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - Second-order Conditional Gradient Sliding
N2 - Constrained second-order convex optimization algorithms are the method of choice when a high-accuracy solution to a problem is needed, due to their local quadratic convergence. These algorithms require the solution of a constrained quadratic subproblem at every iteration. We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm, which uses a projection-free algorithm to solve the constrained quadratic subproblems inexactly. When the feasible region is a polytope, the algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations. Once in the quadratic regime, the SOCGS algorithm requires O(log(log(1/ε))) first-order and Hessian oracle calls and O(log(1/ε) log(log(1/ε))) linear minimization oracle calls to achieve an ε-optimal solution. This algorithm is useful when the feasible region can only be accessed efficiently through a linear optimization oracle, and computing first-order information of the function, although possible, is costly.
Y1 - 2020
ER -

TY - JOUR
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
A1 - Schütte, Christof
A1 - Weiser, Martin
T1 - CINDy: Conditional gradient-based Identification of Non-linear Dynamics – Noise-robust recovery
JF - Journal of Computational and Applied Mathematics
N2 - Governing equations are essential to the study of nonlinear dynamics, often enabling the prediction of previously unseen behaviors as well as inclusion in control strategies. The discovery of governing equations from data thus has the potential to transform data-rich fields where well-established dynamical models remain unknown. This work contributes to the recent trend in data-driven sparse identification of nonlinear dynamics of finding the best sparse fit to observational data in a large library of potential nonlinear models. We propose an efficient first-order Conditional Gradient algorithm for solving the underlying optimization problem. In comparison to the most prominent alternative algorithms, the new algorithm shows significantly improved performance on several essential issues such as sparsity induction, structure preservation, noise robustness, and sample efficiency. We demonstrate these advantages on several dynamics from the fields of synchronization, particle dynamics, and enzyme chemistry.
Y1 - 2021
ER -

TY - CHAP
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
A1 - Besançon, Mathieu
T1 - Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions
T2 - Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021)
N2 - Generalized self-concordance is a key property present in the objective function of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step size strategy γ_t = 2/(t + 2), obtaining an O(1/t) convergence rate for this class of functions in terms of primal gap and Frank-Wolfe gap, where t is the iteration count. This avoids the use of second-order information or the need to estimate local smoothness parameters, as required in previous work. We also show improved convergence rates for various common cases, e.g., when the feasible region under consideration is uniformly convex or polyhedral.
Y1 - 2021
ER -

TY - JOUR
A1 - Besançon, Mathieu
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - FrankWolfe.jl: a high-performance and flexible toolbox for Frank-Wolfe algorithms and Conditional Gradients
JF - INFORMS Journal on Computing
N2 - We present FrankWolfe.jl, an open-source implementation of several popular Frank-Wolfe and conditional gradients variants for first-order constrained optimization. The package is designed with flexibility and high performance in mind, allowing for easy extension and relying on few assumptions regarding the user-provided functions. It supports Julia's unique multiple dispatch feature, and it interfaces smoothly with generic linear optimization formulations using MathOptInterface.jl.
Y1 - 2022
U6 - https://doi.org/10.1287/ijoc.2022.1191
VL - 34
IS - 5
SP - 2383
EP - 2865
ER -
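As a companion to the FrankWolfe.jl entry above, the following is a minimal usage sketch in Julia. It relies only on the public interface described in the paper and the package documentation (frank_wolfe, ProbabilitySimplexOracle, compute_extreme_point, and the Agnostic step-size rule, which implements the open-loop step size γ_t = 2/(t + 2) used in the NeurIPS 2021 paper above); the objective, dimension, and parameter values are illustrative assumptions, not taken from the publications.

using FrankWolfe
using LinearAlgebra

# Illustrative objective: squared distance to a random target point,
# minimized over the unit probability simplex.
n = 100
xp = rand(n)
f(x) = norm(x - xp)^2
grad!(storage, x) = (storage .= 2 .* (x - xp))

# Linear minimization oracle (LMO) for the unit probability simplex;
# a starting vertex is obtained from the LMO itself.
lmo = FrankWolfe.ProbabilitySimplexOracle(1.0)
x0 = FrankWolfe.compute_extreme_point(lmo, zeros(n))

# Vanilla Frank-Wolfe with the open-loop step size γ_t = 2/(t + 2).
x, v, primal, dual_gap = FrankWolfe.frank_wolfe(
    f, grad!, lmo, x0;
    max_iteration=1_000,
    line_search=FrankWolfe.Agnostic(),
    verbose=true,
)

Swapping line_search for another rule (e.g., adaptive or exact line search) or the LMO for another feasible region only changes the corresponding keyword argument and oracle object, which is the flexibility the abstract refers to.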