TY - JOUR
A1 - Anari, N.
A1 - Haghtalab, N.
A1 - Naor, S.
A1 - Pokutta, Sebastian
A1 - Singh, M.
A1 - Torrico, A.
T1 - Structured Robust Submodular Maximization: Offline and Online Algorithms
JF - INFORMS Journal on Computing
Y1 - 2020
ER -

TY - JOUR
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - Second-order Conditional Gradient Sliding
N2 - Constrained second-order convex optimization algorithms are the method of choice when a high-accuracy solution to a problem is needed, due to their local quadratic convergence. These algorithms require the solution of a constrained quadratic subproblem at every iteration. We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm, which uses a projection-free algorithm to solve the constrained quadratic subproblems inexactly. When the feasible region is a polytope, the algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations. Once in the quadratic regime, the SOCGS algorithm requires O(log(log(1/ε))) first-order and Hessian oracle calls and O(log(1/ε) log(log(1/ε))) linear minimization oracle calls to achieve an ε-optimal solution. This algorithm is useful when the feasible region can only be accessed efficiently through a linear optimization oracle, and computing first-order information of the function, although possible, is costly.
Y1 - 2020
ER -

TY - JOUR
A1 - Combettes, Cyrille W.
A1 - Spiegel, Christoph
A1 - Pokutta, Sebastian
T1 - Projection-Free Adaptive Gradients for Large-Scale Optimization
N2 - The complexity in large-scale optimization can lie in both handling the objective function and handling the constraint set. In this respect, stochastic Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens, by querying only approximate first-order information from the objective and by maintaining feasibility of the iterates without using projections. In this paper, we improve the quality of their first-order information by blending in adaptive gradients. We derive convergence rates and demonstrate the computational advantage of our method over the state-of-the-art stochastic Frank-Wolfe algorithms on both convex and nonconvex objectives. The experiments further show that our method can improve the performance of adaptive gradient algorithms for constrained optimization.
Y1 - 2020
ER -

TY - JOUR
A1 - Faenza, Yuri
A1 - Muñoz, Gonzalo
A1 - Pokutta, Sebastian
T1 - New Limits of Treewidth-Based Tractability in Optimization
JF - Mathematical Programming
Y1 - 2020
U6 - https://doi.org/10.1007/s10107-020-01563-5
N1 - URL of the PDF: http://link.springer.com/article/10.1007/s10107-020-01563-5
N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2018/09/22/treewidth-abstract.html
VL - 191
SP - 559
EP - 594
ER -

TY - JOUR
A1 - Pokutta, Sebastian
A1 - Spiegel, Christoph
A1 - Zimmer, Max
T1 - Deep Neural Network Training with Frank-Wolfe
N2 - This paper studies the empirical efficacy and benefits of using projection-free first-order methods in the form of Conditional Gradients, a.k.a. Frank-Wolfe methods, for training Neural Networks with constrained parameters. We draw comparisons both to current state-of-the-art stochastic Gradient Descent methods as well as across different variants of stochastic Conditional Gradients. In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants. We then show that, by choosing an appropriate region, one can achieve performance exceeding that of unconstrained stochastic Gradient Descent and matching state-of-the-art results relying on L2-regularization. Lastly, we also demonstrate that, besides impacting performance, the particular choice of constraints can have a drastic impact on the learned representations.
Y1 - 2020
ER -