TY - JOUR
A1 - Bärmann, Andreas
A1 - Martin, Alexander
A1 - Pokutta, Sebastian
A1 - Schneider, Oskar
T1 - An Online-Learning Approach to Inverse Optimization
Y1 - 2018
ER -

TY - CHAP
A1 - Carderera, Alejandro
A1 - Diakonikolas, Jelena
A1 - Lin, Cheuk Yin
A1 - Pokutta, Sebastian
T1 - Parameter-free Locally Accelerated Conditional Gradients
T2 - ICML 2021
N2 - Projection-free conditional gradient (CG) methods are the algorithms of choice for constrained optimization setups in which projections are often computationally prohibitive but linear optimization over the constraint set remains computationally feasible. Unlike in projection-based methods, globally accelerated convergence rates are in general unattainable for CG. However, a very recent work on Locally accelerated CG (LaCG) has demonstrated that local acceleration for CG is possible for many settings of interest. The main downside of LaCG is that it requires knowledge of the smoothness and strong convexity parameters of the objective function. We remove this limitation by introducing a novel, Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees. Our theoretical results are complemented by numerical experiments, which demonstrate local acceleration and showcase the practical improvements of PF-LaCG over non-accelerated algorithms, both in terms of iteration count and wall-clock time.
Y1 - 2021
ER -

TY - JOUR
A1 - Bienstock, Daniel
A1 - Muñoz, Gonzalo
A1 - Pokutta, Sebastian
T1 - Principled Deep Neural Network Training through Linear Programming
N2 - Deep Learning has received significant attention due to its impressive performance in many state-of-the-art learning tasks. Unfortunately, while very powerful, Deep Learning is not well understood theoretically, and in particular results on the complexity of training deep neural networks have only recently been obtained. In this work we show that large classes of deep neural networks with various architectures (e.g., DNNs, CNNs, Binary Neural Networks, and ResNets), activation functions (e.g., ReLUs and leaky ReLUs), and loss functions (e.g., Hinge loss, Euclidean loss, etc.) can be trained to near optimality with desired target accuracy using linear programming, in time that is exponential in the input data and parameter space dimension and polynomial in the size of the data set; improvements of the dependence on the input dimension are known to be unlikely assuming P≠NP, and improving the dependence on the parameter space dimension remains open. In particular, we obtain polynomial-time algorithms for training for a given fixed network architecture. Our work applies more broadly to empirical risk minimization problems, which allows us to generalize various previous results and obtain new complexity results for previously unstudied architectures in the proper learning setting.
Y1 - 2018
ER -

TY - CHAP
A1 - Kerdreux, Thomas
A1 - d'Aspremont, Alexandre
A1 - Pokutta, Sebastian
T1 - Projection-Free Optimization on Uniformly Convex Sets
T2 - To Appear in Proceedings of AISTATS
Y1 - 2020
ER -

TY - JOUR
A1 - Roux, Christophe
A1 - Pokutta, Sebastian
A1 - Wirth, Elias
A1 - Kerdreux, Thomas
T1 - Efficient Online-Bandit Strategies for Minimax Learning Problems
N2 - Several learning problems involve solving min-max problems, e.g., empirical distributionally robust learning [Namkoong and Duchi, 2016, Curi et al., 2020] or learning with non-standard aggregated losses [Shalev-Shwartz and Wexler, 2016, Fan et al., 2017]. More specifically, these problems are convex-linear problems where the minimization is carried out over the model parameters w ∈ W and the maximization over the empirical distribution p ∈ K of the training set indexes, where K is the simplex or a subset of it. To design efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm. We argue that the efficiency of such approaches critically depends on the structure of K and propose two properties of K that facilitate designing efficient algorithms. We focus on a specific family of sets Sn,k encompassing various learning applications and provide high-probability convergence guarantees to the minimax values.
Y1 - 2021
ER -

TY - JOUR
A1 - Kerdreux, Thomas
A1 - Roux, Christophe
A1 - d'Aspremont, Alexandre
A1 - Pokutta, Sebastian
T1 - Linear Bandits on Uniformly Convex Sets
JF - Journal of Machine Learning Research
N2 - Linear bandit algorithms yield O~(n√T) pseudo-regret bounds on compact convex action sets K ⊂ Rn, and two types of structural assumptions lead to better pseudo-regret bounds. When K is the simplex or an ℓp ball with p ∈ ]1,2], there exist bandit algorithms with O~(√n√T) pseudo-regret bounds. Here, we derive bandit algorithms for some strongly convex sets beyond ℓp balls that enjoy pseudo-regret bounds of O~(√n√T), which answers an open question from [BCB12, §5.5.]. Interestingly, when the action set is uniformly convex but not necessarily strongly convex, we obtain pseudo-regret bounds with a dimension dependency smaller than O(√n). However, this comes at the expense of asymptotic rates in T varying between O(√T) and O(T).
Y1 - 2021
UR - https://www.jmlr.org/papers/v22/21-0277.html
VL - 22
IS - 284
SP - 1
EP - 23
ER -

TY - JOUR
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
A1 - Schütte, Christof
A1 - Weiser, Martin
T1 - An efficient first-order conditional gradient algorithm in data-driven sparse identification of nonlinear dynamics to solve sparse recovery problems under noise
JF - Journal of Computational and Applied Mathematics
N2 - Governing equations are essential to the study of nonlinear dynamics, often enabling the prediction of previously unseen behaviors as well as their inclusion in control strategies. The discovery of governing equations from data thus has the potential to transform data-rich fields where well-established dynamical models remain unknown. This work contributes to the recent trend in data-driven sparse identification of nonlinear dynamics of finding the best sparse fit to observational data in a large library of potential nonlinear models. We propose an efficient first-order Conditional Gradient algorithm for solving the underlying optimization problem. In comparison to the most prominent alternative algorithms, the new algorithm shows significantly improved performance on several essential issues such as sparsity induction, structure preservation, noise robustness, and sample efficiency. We demonstrate these advantages on several dynamics from the fields of synchronization, particle dynamics, and enzyme chemistry.
Y1 - 2021
ER -

TY - JOUR
A1 - Pokutta, Sebastian
A1 - Spiegel, Christoph
A1 - Zimmer, Max
T1 - Deep Neural Network Training with Frank-Wolfe
N2 - This paper studies the empirical efficacy and benefits of using projection-free first-order methods in the form of Conditional Gradients, a.k.a. Frank-Wolfe methods, for training Neural Networks with constrained parameters. We draw comparisons both to current state-of-the-art stochastic Gradient Descent methods and across different variants of stochastic Conditional Gradients. In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants. We then show that, by choosing an appropriate region, one can achieve performance exceeding that of unconstrained stochastic Gradient Descent and matching state-of-the-art results relying on L2-regularization. Lastly, we also demonstrate that, besides impacting performance, the particular choice of constraints can have a drastic impact on the learned representations.
Y1 - 2020
ER -

TY - JOUR
A1 - Combettes, Cyrille W.
A1 - Spiegel, Christoph
A1 - Pokutta, Sebastian
T1 - Projection-Free Adaptive Gradients for Large-Scale Optimization
N2 - The complexity in large-scale optimization can lie in both handling the objective function and handling the constraint set. In this respect, stochastic Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens, by querying only approximate first-order information from the objective and by maintaining feasibility of the iterates without using projections. In this paper, we improve the quality of their first-order information by blending in adaptive gradients. We derive convergence rates and demonstrate the computational advantage of our method over state-of-the-art stochastic Frank-Wolfe algorithms on both convex and nonconvex objectives. The experiments further show that our method can improve the performance of adaptive gradient algorithms for constrained optimization.
Y1 - 2020
ER -

TY - CHAP
A1 - Pfetsch, Marc
A1 - Pokutta, Sebastian
T1 - IPBoost – Non-Convex Boosting via Integer Programming
T2 - Proceedings of ICML
Y1 - 2020
N1 - URL of the Code: https://www2.mathematik.tu-darmstadt.de/~pfetsch/ipboost.html
N1 - URL of the Slides: https://app.box.com/s/8dpvmls88suouy11bkpwufhu7iiz6dxl
N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2020/02/13/ipboost-abstract.html
ER -