TY - JOUR A1 - Šofranac, Boro A1 - Gleixner, Ambros A1 - Pokutta, Sebastian T1 - An Algorithm-independent Measure of Progress for Linear Constraint Propagation JF - Constraints Y1 - 2022 U6 - https://doi.org/10.1007/s10601-022-09338-9 VL - 27 SP - 432 EP - 455 ER - TY - CHAP A1 - Chmiela, Antonia A1 - Khalil, Elias B. A1 - Gleixner, Ambros A1 - Lodi, Andrea A1 - Pokutta, Sebastian T1 - Learning to Schedule Heuristics in Branch and Bound T2 - Thirty-fifth Conference on Neural Information Processing Systems, NeurIPS 2021 N2 - Primal heuristics play a crucial role in exact solvers for Mixed Integer Programming (MIP). While solvers are guaranteed to find optimal solutions given sufficient time, real-world applications typically require finding good solutions early on in the search to enable fast decision-making. While much of MIP research focuses on designing effective heuristics, the question of how to manage multiple MIP heuristics in a solver has not received equal attention. Generally, solvers follow hard-coded rules derived from empirical testing on broad sets of instances. Since the performance of heuristics is instance-dependent, using these general rules for a particular problem might not yield the best performance. In this work, we propose the first data-driven framework for scheduling heuristics in an exact MIP solver. By learning from data describing the performance of primal heuristics, we obtain a problem-specific schedule of heuristics that collectively find many solutions at minimal cost. We provide a formal description of the problem and propose an efficient algorithm for computing such a schedule. Compared to the default settings of a state-of-the-art academic MIP solver, we are able to reduce the average primal integral by up to 49% on a class of challenging instances. Y1 - 2021 ER - TY - JOUR A1 - Šofranac, Boro A1 - Gleixner, Ambros A1 - Pokutta, Sebastian T1 - Accelerating domain propagation: An efficient GPU-parallel algorithm over sparse matrices JF - Parallel Computing N2 - • Currently, domain propagation in state-of-the-art MIP solvers is single-threaded only. • The paper presents a novel, efficient GPU algorithm to perform domain propagation. • Challenges are dynamic algorithmic behavior, dependency structures, sparsity patterns. • The algorithm is capable of running entirely on the GPU with no CPU involvement. • We achieve speed-ups of around 10x to 20x, up to 180x on favorably-large instances. Y1 - 2022 U6 - https://doi.org/10.1016/j.parco.2021.102874 VL - 109 SP - 102874 ER - TY - CHAP A1 - Sofranac, Boro A1 - Gleixner, Ambros A1 - Pokutta, Sebastian T1 - Accelerating Domain Propagation: an Efficient GPU-Parallel Algorithm over Sparse Matrices T2 - 2020 IEEE/ACM 10th Workshop on Irregular Applications: Architectures and Algorithms (IA3) N2 - Fast domain propagation of linear constraints has become a crucial component of today's best algorithms and solvers for mixed integer programming and pseudo-Boolean optimization to achieve peak solving performance. Irregularities in the form of dynamic algorithmic behaviour, dependency structures, and sparsity patterns in the input data make efficient implementations of domain propagation on GPUs and, more generally, on parallel architectures challenging. This is one of the main reasons why domain propagation in state-of-the-art solvers is single-threaded only.
In this paper, we present a new algorithm for domain propagation which (a) avoids these problems and allows for an efficient implementation on GPUs, and is (b) capable of running propagation rounds entirely on the GPU, without any need for synchronization or communication with the CPU. We present extensive computational results which demonstrate the effectiveness of our approach and show that ample speedups are possible on practically relevant problems: on state-of-the-art GPUs, our geometric mean speed-up for reasonably-large instances is around 10x to 20x and can be as high as 195x on favorably-large instances. Y1 - 2020 U6 - https://doi.org/10.1109/IA351965.2020.00007 N1 - URL of the Slides: https://app.box.com/s/qy0pjmhtbm7shk2ypxjxlh2sj4nudvyu N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2020/09/20/gpu-prob.html SP - 1 EP - 11 ER - TY - CHAP A1 - Martínez-Rubio, David A1 - Wirth, Elias A1 - Pokutta, Sebastian T1 - Accelerated and Sparse Algorithms for Approximate Personalized PageRank and Beyond T2 - Proceedings of Machine Learning Research N2 - It has recently been shown that ISTA, an unaccelerated optimization method, presents sparse updates for the ℓ_1-regularized undirected personalized PageRank problem (Fountoulakis et al., 2019), leading to cheap iteration complexity and providing the same guarantees as the approximate personalized PageRank algorithm (APPR) (Andersen et al., 2006). In this work, we design an accelerated optimization algorithm for this problem that also performs sparse updates, providing an affirmative answer to the COLT 2022 open question of Fountoulakis and Yang (2022). Acceleration provides a reduced dependence on the condition number, while the dependence on the sparsity in our updates differs from the ISTA approach. Further, we design another algorithm by using conjugate directions to achieve an exact solution while exploiting sparsity. Both algorithms lead to faster convergence for certain parameter regimes. Our findings apply beyond PageRank and work for any quadratic objective whose Hessian is a positive-definite M-matrix. Y1 - 2023 UR - https://proceedings.mlr.press/v195/martinez-rubio23a/martinez-rubio23a.pdf VL - 195 SP - 1 EP - 35 ER - TY - CHAP A1 - Parczyk, Olaf A1 - Pokutta, Sebastian A1 - Spiegel, Christoph A1 - Szabó, Tibor T1 - Fully Computer-assisted Proofs in Extremal Combinatorics T2 - Proceedings of the AAAI Conference on Artificial Intelligence N2 - We present a fully computer-assisted proof system for solving a particular family of problems in Extremal Combinatorics. Existing techniques using Flag Algebras have proven powerful in the past, but have so far lacked a computational counterpart to derive matching constructive bounds. We demonstrate that common search heuristics are capable of finding constructions far beyond the reach of human intuition. Additionally, the most obvious downside of such heuristics, namely a missing guarantee of global optimality, can often be fully eliminated in this case through lower bounds and stability results coming from the Flag Algebra approach. To illustrate the potential of this approach, we study two related and well-known problems in Extremal Graph Theory that go back to questions of Erdős from the 60s.
Most notably, we present the first major improvement in the upper bound of the Ramsey multiplicity of the complete graph on 4 vertices in 25 years, precisely determine the first off-diagonal Ramsey multiplicity number, and settle the minimum number of independent sets of size four in graphs with clique number strictly less than five. Y1 - 2023 U6 - https://doi.org/10.1609/aaai.v37i10.26470 VL - 37 IS - 10 SP - 12482 EP - 12490 ER - TY - JOUR A1 - Faenza, Yuri A1 - Muñoz, Gonzalo A1 - Pokutta, Sebastian T1 - New Limits of Treewidth-based tractability in Optimization JF - Mathematical Programming Y1 - 2020 U6 - https://doi.org/10.1007/s10107-020-01563-5 N1 - URL of the PDF: http://link.springer.com/article/10.1007/s10107-020-01563-5 N1 - URL of the Abstract: http://www.pokutta.com/blog/research/2018/09/22/treewidth-abstract.html VL - 191 SP - 559 EP - 594 ER - TY - CHAP A1 - Wirth, Elias A1 - Pokutta, Sebastian T1 - Conditional Gradients for the Approximately Vanishing Ideal T2 - Proceedings of The 25th International Conference on Artificial Intelligence and Statistics N2 - The vanishing ideal of a set of points X is the set of polynomials that evaluate to 0 over all points x in X and admits an efficient representation by a finite set of polynomials called generators. To accommodate the noise in the data set, we introduce the Conditional Gradients Approximately Vanishing Ideal algorithm (CGAVI) for the construction of the set of generators of the approximately vanishing ideal. The constructed set of generators captures polynomial structures in data and gives rise to a feature map that can, for example, be used in combination with a linear classifier for supervised learning. In CGAVI, we construct the set of generators by solving specific instances of (constrained) convex optimization problems with the Pairwise Frank-Wolfe algorithm (PFW). Among other things, the constructed generators inherit the LASSO generalization bound and vanish not only on the training data but also on out-of-sample data. Moreover, CGAVI admits a compact representation of the approximately vanishing ideal by constructing few generators with sparse coefficient vectors. Y1 - 2022 UR - https://proceedings.mlr.press/v151/wirth22a.html VL - 151 SP - 2191 EP - 2209 ER - TY - JOUR A1 - Kerdreux, Thomas A1 - Roux, Christophe A1 - d'Aspremont, Alexandre A1 - Pokutta, Sebastian T1 - Linear Bandits on Uniformly Convex Sets JF - Journal of Machine Learning Research N2 - Linear bandit algorithms yield Õ(n√T) pseudo-regret bounds on compact convex action sets K ⊂ R^n, and two types of structural assumptions lead to better pseudo-regret bounds. When K is the simplex or an ℓ_p ball with p ∈ ]1,2], there exist bandit algorithms with Õ(√n√T) pseudo-regret bounds. Here, we derive bandit algorithms for some strongly convex sets beyond ℓ_p balls that enjoy pseudo-regret bounds of Õ(√n√T), which answers an open question from [BCB12, §5.5]. Interestingly, when the action set is uniformly convex but not necessarily strongly convex, we obtain pseudo-regret bounds with a dimension dependency smaller than O(√n). However, this comes at the expense of asymptotic rates in T varying between O(√T) and O(T).
Y1 - 2021 UR - https://www.jmlr.org/papers/v22/21-0277.html VL - 22 IS - 284 SP - 1 EP - 23 ER - TY - CHAP A1 - Martínez-Rubio, David A1 - Pokutta, Sebastian T1 - Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties T2 - Proceedings of Thirty Sixth Conference on Learning Theory N2 - We propose a globally-accelerated, first-order method for the optimization of smooth and (strongly or not) geodesically-convex functions in a wide class of Hadamard manifolds. We achieve the same convergence rates as Nesterov’s accelerated gradient descent, up to a multiplicative geometric penalty and log factors. Crucially, we can enforce our method to stay within a compact set we define. Prior fully accelerated works resort to assuming that the iterates of their algorithms stay in some pre-specified compact set, except for two previous methods of limited applicability. For our manifolds, this solves the open question in (Kim and Yang, 2022) about obtaining global general acceleration without iterates assumptively staying in the feasible set. In our solution, we design an accelerated Riemannian inexact proximal point algorithm, which is a result that was unknown even with exact access to the proximal operator, and is of independent interest. For smooth functions, we show we can implement the prox step inexactly with first-order methods in Riemannian balls of a certain diameter that is enough for global accelerated optimization. Y1 - 2023 UR - https://proceedings.mlr.press/v195/martinez-rubio23a.html VL - 195 SP - 359 EP - 393 ER - TY - JOUR A1 - Gelß, Patrick A1 - Klus, Stefan A1 - Knebel, Sebastian A1 - Shakibaei, Zarin A1 - Pokutta, Sebastian T1 - Low-Rank Tensor Decompositions of Quantum Circuits JF - Journal of Computational Physics N2 - Quantum computing is arguably one of the most revolutionary and disruptive technologies of this century. Due to the ever-increasing number of potential applications as well as the continuing rise in complexity, the development, simulation, optimization, and physical realization of quantum circuits is of utmost importance for designing novel algorithms. We show how matrix product states (MPSs) and matrix product operators (MPOs) can be used to express certain quantum states, quantum gates, and entire quantum circuits as low-rank tensors. This enables the analysis and simulation of complex quantum circuits on classical computers and provides insight into the underlying structure of the system. We present different examples to demonstrate the advantages of MPO formulations and show that they are more efficient than conventional techniques if the bond dimensions of the wave function representation can be kept small throughout the simulation.
Y1 - 2022 ER - TY - CHAP A1 - Dey, Santanu Sabush A1 - Pokutta, Sebastian T1 - Design and verify: a new scheme for generating cutting-planes T2 - Proceedings of IPCO, Lecture Notes in Computer Science Y1 - 2011 UR - http://www.optimization-online.org/DB_HTML/2011/04/3002.html N1 - Additional Note: DOI: 10.1007/978-3-642-20807-2_12 VL - 6655 SP - 143 EP - 155 ER - TY - JOUR A1 - Dey, Santanu Sabush A1 - Pokutta, Sebastian T1 - Design and verify: a new scheme for generating cutting-planes JF - Mathematical Programming A Y1 - 2014 UR - http://www.optimization-online.org/DB_HTML/2011/04/3002.html N1 - Additional Note: DOI: 10.1007/978-3-642-20807-2_12 VL - 145 SP - 199 EP - 222 ER - TY - JOUR A1 - Bienstock, Daniel A1 - Muñoz, Gonzalo A1 - Pokutta, Sebastian T1 - Principled Deep Neural Network Training Through Linear Programming JF - Discrete Optimization N2 - Deep learning has received much attention lately due to the impressive empirical performance achieved by training algorithms. Consequently, a need for a better theoretical understanding of these problems has become more evident and multiple works in recent years have focused on this task. In this work, using a unified framework, we show that there exists a polyhedron that simultaneously encodes, in its facial structure, all possible deep neural network training problems that can arise from a given architecture, activation functions, loss function, and sample size. Notably, the size of the polyhedral representation depends only linearly on the sample size, and a better dependency on several other network parameters is unlikely. Using this general result, we compute the size of the polyhedral encoding for commonly used neural network architectures. Our results provide a new perspective on training problems through the lens of polyhedral theory and reveal strong structure arising from these problems. Y1 - 2023 U6 - https://doi.org/10.1016/j.disopt.2023.100795 VL - 49 ER - TY - JOUR A1 - Aigner, Kevin-Martin A1 - Bärmann, Andreas A1 - Braun, Kristin A1 - Liers, Frauke A1 - Pokutta, Sebastian A1 - Schneider, Oskar A1 - Sharma, Kartikey A1 - Tschuppik, Sebastian T1 - Data-driven Distributionally Robust Optimization over Time JF - INFORMS Journal on Optimization N2 - Stochastic optimization (SO) is a classical approach for optimization under uncertainty that typically requires knowledge about the probability distribution of uncertain parameters. Because the latter is often unknown, distributionally robust optimization (DRO) provides a strong alternative that determines the best guaranteed solution over a set of distributions (ambiguity set). In this work, we present an approach for DRO over time that uses online learning and scenario observations arriving as a data stream to learn more about the uncertainty. Our robust solutions adapt over time and reduce the cost of protection with shrinking ambiguity. For various kinds of ambiguity sets, the robust solutions converge to the SO solution. Our algorithm achieves the optimization and learning goals without solving the DRO problem exactly at any step. We also provide a regret bound for the quality of the online strategy that converges at a rate of O(log T/√T), where T is the number of iterations. Furthermore, we illustrate the effectiveness of our procedure by numerical experiments on mixed-integer optimization instances from popular benchmark libraries and give practical examples stemming from telecommunications and routing.
Our algorithm is able to solve the DRO over time problem significantly faster than standard reformulations. Y1 - 2023 U6 - https://doi.org/10.1287/ijoo.2023.0091 VL - 5 IS - 4 SP - 376 EP - 394 ER - TY - CHAP A1 - Carderera, Alejandro A1 - Pokutta, Sebastian A1 - Besançon, Mathieu T1 - Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions T2 - Thirty-fifth Conference on Neural Information Processing Systems, NeurIPS 2021 N2 - Generalized self-concordance is a key property present in the objective function of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step size strategy γ_t = 2/(t + 2), obtaining an O(1/t) convergence rate for this class of functions in terms of primal gap and Frank-Wolfe gap, where t is the iteration count. This avoids the use of second-order information or the need to estimate local smoothness parameters of previous work. We also show improved convergence rates for various common cases, e.g., when the feasible region under consideration is uniformly convex or polyhedral. Y1 - 2021 ER - TY - JOUR A1 - Besançon, Mathieu A1 - Carderera, Alejandro A1 - Pokutta, Sebastian T1 - FrankWolfe.jl: a high-performance and flexible toolbox for Frank-Wolfe algorithms and Conditional Gradients JF - INFORMS Journal on Computing N2 - We present FrankWolfe.jl, an open-source implementation of several popular Frank–Wolfe and conditional gradients variants for first-order constrained optimization. The package is designed with flexibility and high performance in mind, allowing for easy extension and relying on few assumptions regarding the user-provided functions. It supports Julia’s unique multiple dispatch feature, and it interfaces smoothly with generic linear optimization formulations using MathOptInterface.jl. Y1 - 2022 U6 - https://doi.org/10.1287/ijoc.2022.1191 VL - 34 IS - 5 SP - 2383 EP - 2865 ER - TY - CHAP A1 - MacDonald, Jan A1 - Besançon, Mathieu A1 - Pokutta, Sebastian T1 - Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings T2 - Proceedings of the International Conference on Machine Learning Y1 - 2022 ER - TY - JOUR A1 - Designolle, Sébastien A1 - Besançon, Mathieu A1 - Iommazzo, Gabriele A1 - Knebel, Sebastian A1 - Gelß, Patrick A1 - Pokutta, Sebastian T1 - Improved Local Models and New Bell Inequalities Via Frank-Wolfe Algorithms JF - Physical Review Research N2 - In Bell scenarios with two outcomes per party, we algorithmically consider the two sides of the membership problem for the local polytope: Constructing local models and deriving separating hyperplanes, that is, Bell inequalities. We take advantage of the recent developments in so-called Frank-Wolfe algorithms to significantly increase the convergence rate of existing methods. First, we study the threshold value for the nonlocality of two-qubit Werner states under projective measurements. Here, we improve on both the upper and lower bounds present in the literature. Importantly, our bounds are entirely analytical; moreover, they yield refined bounds on the value of the Grothendieck constant of order three: 1.4367 ⩽ K_G(3) ⩽ 1.4546. Second, we demonstrate the efficiency of our approach in multipartite Bell scenarios, and present local models for all projective measurements with visibilities noticeably higher than the entanglement threshold. We make our entire code accessible as a Julia library called BellPolytopes.jl.
Y1 - 2023 U6 - https://doi.org/10.1103/PhysRevResearch.5.043059 VL - 5 SP - 043059 ER - TY - CHAP A1 - Wirth, Elias A1 - Kerdreux, Thomas A1 - Pokutta, Sebastian T1 - Acceleration of Frank-Wolfe Algorithms with Open Loop Step-sizes T2 - Proceedings of International Conference on Artificial Intelligence and Statistics Y1 - 2023 ER - TY - CHAP A1 - Zimmer, Max A1 - Spiegel, Christoph A1 - Pokutta, Sebastian T1 - How I Learned to Stop Worrying and Love Retraining T2 - Proceedings of International Conference on Learning Representations Y1 - 2023 ER - TY - JOUR A1 - Pokutta, Sebastian T1 - The Frank-Wolfe algorithm: a short introduction JF - Jahresbericht der Deutschen Mathematiker-Vereinigung Y1 - 2023 ER - TY - CHAP A1 - Zimmer, Max A1 - Spiegel, Christoph A1 - Pokutta, Sebastian T1 - Sparse Model Soups T2 - Proceedings of International Conference on Learning Representations Y1 - 2024 ER - TY - CHAP A1 - Gasse, Maxime A1 - Bowly, Simon A1 - Cappart, Quentin A1 - Charfreitag, Jonas A1 - Charlin, Laurent A1 - Chételat, Didier A1 - Chmiela, Antonia A1 - Dumouchelle, Justin A1 - Gleixner, Ambros A1 - Kazachkov, Aleksandr M. A1 - Khalil, Elias A1 - Lichocki, Pawel A1 - Lodi, Andrea A1 - Lubin, Miles A1 - Maddison, Chris J. A1 - Morris, Christopher A1 - Papageorgiou, Dimitri J.
A1 - Parjadis, Augustin A1 - Pokutta, Sebastian A1 - Prouvost, Antoine A1 - Scavuzzo, Lara A1 - Zarpellon, Giulia A1 - Yang, Linxin A1 - Lai, Sha A1 - Wang, Akang A1 - Luo, Xiaodong A1 - Zhou, Xiang A1 - Huang, Haohan A1 - Shao, Shengcheng A1 - Zhu, Yuanming A1 - Zhang, Dong A1 - Quan, Tao A1 - Cao, Zixuan A1 - Xu, Yang A1 - Huang, Zhewei A1 - Zhou, Shuchang A1 - Chen, Binbin A1 - He, Minggui A1 - Hao, Hao A1 - Zhang, Zhiyu A1 - An, Zhiwu A1 - Mao, Kun T1 - The Machine Learning for Combinatorial Optimization Competition (ML4CO): results and insights T2 - Proceedings of Conference on Neural Information Processing Systems Y1 - 2022 ER - TY - CHAP A1 - Wirth, Elias A1 - Kera, Hiroshi A1 - Pokutta, Sebastian T1 - Approximate Vanishing Ideal Computations at Scale T2 - Proceedings of International Conference on Learning Representations Y1 - 2023 ER - TY - CHAP A1 - Martínez-Rubio, David A1 - Roux, Christophe A1 - Criscitiello, Christopher A1 - Pokutta, Sebastian T1 - Accelerated Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties T2 - Proceedings of Optimization for Machine Learning (NeurIPS Workshop OPT 2023) Y1 - 2023 ER - TY - CHAP A1 - Martínez-Rubio, David A1 - Pokutta, Sebastian T1 - Accelerated Riemannian optimization: Handling constraints with a prox to bound geometric penalties T2 - Proceedings of Optimization for Machine Learning (NeurIPS Workshop OPT 2022) Y1 - 2022 ER - TY - JOUR A1 - Hunkenschröder, Christoph A1 - Pokutta, Sebastian A1 - Weismantel, Robert T1 - Optimizing a low-dimensional convex function over a high-dimensional cube JF - SIAM Journal on Optimization Y1 - 2022 ER - TY - CHAP A1 - Thuerck, Daniel A1 - Sofranac, Boro A1 - Pfetsch, Marc A1 - Pokutta, Sebastian T1 - Learning cuts via enumeration oracles T2 - Proceedings of Conference on Neural Information Processing Systems Y1 - 2023 ER - TY - CHAP A1 - Sharma, Kartikey A1 - Hendrych, Deborah A1 - Besançon, Mathieu A1 - Pokutta, Sebastian T1 - Network Design for the Traffic Assignment Problem with Mixed-Integer Frank-Wolfe T2 - Proceedings of INFORMS Optimization Society Conference Y1 - 2024 ER - TY - JOUR A1 - Kreimeier, Timo A1 - Pokutta, Sebastian A1 - Walther, Andrea A1 - Woodstock, Zev T1 - On a Frank-Wolfe approach for abs-smooth functions JF - Optimization Methods and Software Y1 - U6 - https://doi.org/10.1080/10556788.2023.2296985 ER - TY - JOUR A1 - Designolle, Sébastien A1 - Vértesi, Tamás A1 - Pokutta, Sebastian T1 - Symmetric multipartite Bell inequalities via Frank-Wolfe algorithms JF - Physical Review A N2 - In multipartite Bell scenarios, we study the nonlocality robustness of the Greenberger-Horne-Zeilinger (GHZ) state. When each party performs planar measurements forming a regular polygon, we exploit the symmetry of the resulting correlation tensor to drastically accelerate the computation of (i) a Bell inequality via Frank-Wolfe algorithms and (ii) the corresponding local bound. The Bell inequalities obtained are facets of the symmetrized local polytope and they give the best-known upper bounds on the nonlocality robustness of the GHZ state for three to ten parties. Moreover, for four measurements per party, we generalize our facets and hence show, for any number of parties, an improvement on Mermin's inequality in terms of noise robustness. We also compute the detection efficiency of our inequalities and show that some give rise to the activation of nonlocality in star networks, a property that was only shown with an infinite number of measurements.
Y1 - 2024 U6 - https://doi.org/10.1103/PhysRevA.109.022205 VL - 109 IS - 2 ER - TY - JOUR A1 - Deza, Antoine A1 - Pokutta, Sebastian A1 - Pournin, Lionel T1 - The complexity of geometric scaling JF - Operations Research Letters Y1 - 2024 U6 - https://doi.org/10.1016/j.orl.2023.11.010 VL - 52 SP - 107057 ER - TY - CHAP A1 - Wäldchen, Stephan A1 - Sharma, Kartikey A1 - Turan, Berkant A1 - Zimmer, Max A1 - Pokutta, Sebastian T1 - Interpretability Guarantees with Merlin-Arthur Classifiers T2 - Proceedings of International Conference on Artificial Intelligence and Statistics N2 - We propose an interactive multi-agent classifier that provides provable interpretability guarantees even for complex agents such as neural networks. These guarantees consist of lower bounds on the mutual information between selected features and the classification decision. Our results are inspired by the Merlin-Arthur protocol from Interactive Proof Systems and express these bounds in terms of measurable metrics such as soundness and completeness. Compared to existing interactive setups, we rely neither on optimal agents nor on the assumption that features are distributed independently. Instead, we use the relative strength of the agents as well as the new concept of Asymmetric Feature Correlation which captures the precise kind of correlations that make interpretability guarantees difficult. We evaluate our results on two small-scale datasets where high mutual information can be verified explicitly. Y1 - 2024 ER - TY - JOUR A1 - Braun, Gábor A1 - Guzmán, Cristóbal A1 - Pokutta, Sebastian T1 - Corrections to “Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory” JF - IEEE Transactions on Information Theory N2 - This note closes a gap in the proof of Theorem VI.3 from the article “Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory” (2017). 
Y1 - 2024 U6 - https://doi.org/10.1109/TIT.2024.3357200 VL - 70 IS - 7 SP - 5408 EP - 5409 ER - TY - CHAP A1 - Martínez-Rubio, David A1 - Roux, Christophe A1 - Pokutta, Sebastian T1 - Convergence and trade-offs in Riemannian gradient descent and Riemannian proximal point T2 - Proceedings of International Conference on Machine Learning Y1 - 2024 ER - TY - CHAP A1 - Mundinger, Konrad A1 - Pokutta, Sebastian A1 - Spiegel, Christoph A1 - Zimmer, Max T1 - Extending the Continuum of Six-Colorings T2 - Proceedings of Discrete Mathematics Days Y1 - 2024 ER - TY - JOUR A1 - Mundinger, Konrad A1 - Pokutta, Sebastian A1 - Spiegel, Christoph A1 - Zimmer, Max T1 - Extending the Continuum of Six-Colorings JF - Geombinatorics Quarterly Y1 - 2024 ER - TY - JOUR A1 - Parczyk, Olaf A1 - Pokutta, Sebastian A1 - Spiegel, Christoph A1 - Szabó, Tibor T1 - New Ramsey multiplicity bounds and search heuristics JF - Foundations of Computational Mathematics Y1 - 2024 ER - TY - CHAP A1 - Pauls, Jan A1 - Zimmer, Max A1 - Kelly, Una M. A1 - Schwartz, Martin A1 - Saatchi, Sassan A1 - Ciais, Philippe A1 - Pokutta, Sebastian A1 - Brandt, Martin A1 - Gieseke, Fabian T1 - Estimating canopy height at scale T2 - Proceedings of International Conference on Machine Learning Y1 - 2024 ER - TY - CHAP A1 - Kiem, Aldo A1 - Pokutta, Sebastian A1 - Spiegel, Christoph T1 - The 4-color Ramsey multiplicity of triangles T2 - Proceedings of Discrete Mathematics Days Y1 - 2024 ER - TY - CHAP A1 - Kiem, Aldo A1 - Pokutta, Sebastian A1 - Spiegel, Christoph T1 - Categorification of Flag Algebras T2 - Proceedings of Discrete Mathematics Days Y1 - 2024 ER - TY - JOUR A1 - Carderera, Alejandro A1 - Besançon, Mathieu A1 - Pokutta, Sebastian T1 - Scalable Frank-Wolfe on generalized self-concordant functions via simple steps JF - SIAM Journal on Optimization Y1 - 2024 U6 - https://doi.org/10.1137/23M1616789 VL - 34 IS - 3 ER - TY - CHAP A1 - Hendrych, Deborah A1 - Besançon, Mathieu A1 - Pokutta, Sebastian T1 - Solving the optimal experiment design problem with mixed-integer convex methods T2 - 22nd International Symposium on Experimental Algorithms (SEA 2024) N2 - We tackle the Optimal Experiment Design Problem, which consists of choosing experiments to run or observations to select from a finite set to estimate the parameters of a system. The objective is to maximize some measure of information gained about the system from the observations, leading to a convex integer optimization problem. We leverage Boscia.jl, a recent algorithmic framework, which is based on a nonlinear branch-and-bound algorithm with node relaxations solved to approximate optimality using Frank-Wolfe algorithms. One particular advantage of the method is its efficient utilization of the polytope formed by the original constraints, which is preserved by the method, unlike alternative methods relying on epigraph-based formulations. We assess our method against both generic and specialized convex mixed-integer approaches. Computational results highlight the performance of our proposed method, especially on large and challenging instances. Y1 - 2024 U6 - https://doi.org/10.4230/LIPIcs.SEA.2024.16 VL - 301 SP - 16:1 EP - 16:22 ER -