TY - CHAP
A1 - Sofranac, Boro
A1 - Gleixner, Ambros
A1 - Pokutta, Sebastian
T1 - Accelerating Domain Propagation: An Efficient GPU-Parallel Algorithm over Sparse Matrices
T2 - 2020 IEEE/ACM 10th Workshop on Irregular Applications: Architectures and Algorithms (IA3)
N2 - Fast domain propagation of linear constraints has become a crucial component of today's best algorithms and solvers for mixed integer programming and pseudo-Boolean optimization to achieve peak solving performance. Irregularities in the form of dynamic algorithmic behaviour, dependency structures, and sparsity patterns in the input data make efficient implementations of domain propagation on GPUs and, more generally, on parallel architectures challenging. This is one of the main reasons why domain propagation in state-of-the-art solvers is single-threaded. In this paper, we present a new algorithm for domain propagation which (a) avoids these problems and allows for an efficient implementation on GPUs, and is (b) capable of running propagation rounds entirely on the GPU, without any need for synchronization or communication with the CPU. We present extensive computational results which demonstrate the effectiveness of our approach and show that ample speedups are possible on practically relevant problems: on state-of-the-art GPUs, our geometric mean speed-up for reasonably large instances is around 10x to 20x and can be as high as 195x on favorably large instances.
Y1 - 2020
U6 - https://doi.org/10.1109/IA351965.2020.00007
N1 - URL of the Slides: https://app.box.com/s/qy0pjmhtbm7shk2ypxjxlh2sj4nudvyu
N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2020/09/20/gpu-prob.html
SP - 1
EP - 11
ER -
TY - JOUR
A1 - Šofranac, Boro
A1 - Gleixner, Ambros
A1 - Pokutta, Sebastian
T1 - Accelerating domain propagation: An efficient GPU-parallel algorithm over sparse matrices
JF - Parallel Computing
N2 - • Currently, domain propagation in state-of-the-art MIP solvers is single-threaded. • The paper presents a novel, efficient GPU algorithm to perform domain propagation. • Challenges are dynamic algorithmic behavior, dependency structures, and sparsity patterns. • The algorithm is capable of running entirely on the GPU with no CPU involvement. • We achieve speed-ups of around 10x to 20x, up to 180x on favorably large instances.
Y1 - 2022
U6 - https://doi.org/10.1016/j.parco.2021.102874
VL - 109
SP - 102874
ER -
TY - CHAP
A1 - Diakonikolas, Jelena
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - Breaking the Curse of Dimensionality (Locally) to Accelerate Conditional Gradients
T2 - OPTML Workshop Paper
Y1 - 2019
N1 - URL of the Code: https://colab.research.google.com/drive/1ejjfCan7xnEhWWJXCIzb03CwQRG9iW_O
N1 - URL of the PDF: https://opt-ml.org/papers/2019/paper_26.pdf
N1 - URL of the Poster: https://app.box.com/s/d7p038u7df422q4jsccbmj15mngqv2ws
N1 - URL of the Slides: https://app.box.com/s/gphkhapso7d1vrfnzqykkb3vx0agxh8w
N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2019/07/04/LaCG-abstract.html
ER -
TY - CHAP
A1 - Combettes, Cyrille W.
A1 - Pokutta, Sebastian
T1 - Blended Matching Pursuit
T2 - Proceedings of NeurIPS
Y1 - 2019
N1 - URL of the Code: https://colab.research.google.com/drive/17XYIxnCcJjKswba9mAaXFWnNGVZdsaXQ
N1 - URL of the PDF: https://papers.nips.cc/paper/8478-blended-matching-pursuit
N1 - URL of the Poster: https://app.box.com/s/j6lbh49udjd45krhxcypufxwdofw6br8
N1 - URL of the Slides: https://app.box.com/s/8lfktq6h3dqp9t2gqydu2tp8h2uxgz7m
N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2019/05/27/bmp-abstract.html
ER -
TY - CHAP
A1 - Pokutta, Sebastian
A1 - Singh, Mohit
A1 - Torrico, Alfredo
T1 - On the Unreasonable Effectiveness of the Greedy Algorithm: Greedy Adapts to Sharpness
T2 - OPTML Workshop Paper
Y1 - 2019
N1 - URL of the PDF: https://opt-ml.org/papers/2019/paper_16.pdf
N1 - URL of the Poster: https://app.box.com/s/24vbh1s2vib11upqyepzen3lzdl13skr
ER -
TY - CHAP
A1 - Criado, Francisco
A1 - Martínez-Rubio, David
A1 - Pokutta, Sebastian
T1 - Fast Algorithms for Packing Proportional Fairness and its Dual
T2 - Proceedings of the Conference on Neural Information Processing Systems
N2 - The proportional fair resource allocation problem is a major problem studied in flow control of networks, operations research, and economic theory, where it has found numerous applications. This problem, defined as the constrained maximization of ∑_i log x_i, is known as the packing proportional fairness problem when the feasible set is defined by positive linear constraints and x ∈ R≥0. In this work, we present a distributed accelerated first-order method for this problem which improves upon previous approaches. We also design an algorithm for the optimization of its dual problem. Both algorithms are width-independent.
Y1 - 2022
VL - 36
ER -
TY - CHAP
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
A1 - Besançon, Mathieu
T1 - Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions
T2 - Thirty-fifth Conference on Neural Information Processing Systems, NeurIPS 2021
N2 - Generalized self-concordance is a key property present in the objective function of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step-size strategy γ_t = 2/(t + 2), obtaining an O(1/t) convergence rate for this class of functions in terms of primal gap and Frank-Wolfe gap, where t is the iteration count. This avoids the use of second-order information or the need to estimate local smoothness parameters of previous work. We also show improved convergence rates for various common cases, e.g., when the feasible region under consideration is uniformly convex or polyhedral.
Y1 - 2021
ER -
TY - JOUR
A1 - Kerdreux, Thomas
A1 - d'Aspremont, Alexandre
A1 - Pokutta, Sebastian
T1 - Local and Global Uniform Convexity Conditions
N2 - We review various characterizations of uniform convexity and smoothness on norm balls in finite-dimensional spaces and connect results stemming from the geometry of Banach spaces with scaling inequalities used in analysing the convergence of optimization methods. In particular, we establish local versions of these conditions to provide sharper insights on a recent body of complexity results in learning theory, online learning, or offline optimization, which rely on the strong convexity of the feasible set. While they have a significant impact on complexity, these strong convexity or uniform convexity properties of feasible sets are not exploited as thoroughly as their functional counterparts, and this work is an effort to correct this imbalance. We conclude with some practical examples in optimization and machine learning where leveraging these conditions and localized assumptions leads to new complexity results.
Y1 - 2021
ER -
TY - CHAP
A1 - Zimmer, Max
A1 - Spiegel, Christoph
A1 - Pokutta, Sebastian
T1 - How I Learned to Stop Worrying and Love Retraining
T2 - Proceedings of International Conference on Learning Representations
Y1 - 2023
ER -
TY - CHAP
A1 - Wirth, Elias
A1 - Kerdreux, Thomas
A1 - Pokutta, Sebastian
T1 - Acceleration of Frank-Wolfe Algorithms with Open-Loop Step-Sizes
T2 - Proceedings of International Conference on Artificial Intelligence and Statistics
Y1 - 2023
ER -
TY - CHAP
A1 - Wirth, Elias
A1 - Kera, Hiroshi
A1 - Pokutta, Sebastian
T1 - Approximate Vanishing Ideal Computations at Scale
T2 - Proceedings of International Conference on Learning Representations
Y1 - 2023
ER -