TY - JOUR
A1 - Gelß, Patrick
A1 - Klus, Stefan
A1 - Knebel, Sebastian
A1 - Shakibaei, Zarin
A1 - Pokutta, Sebastian
T1 - Low-Rank Tensor Decompositions of Quantum Circuits
JF - Journal of Computational Physics
N2 - Quantum computing is arguably one of the most revolutionary and disruptive technologies of this century. Due to the ever-increasing number of potential applications as well as the continuing rise in complexity, the development, simulation, optimization, and physical realization of quantum circuits is of utmost importance for designing novel algorithms. We show how matrix product states (MPSs) and matrix product operators (MPOs) can be used to express certain quantum states, quantum gates, and entire quantum circuits as low-rank tensors. This enables the analysis and simulation of complex quantum circuits on classical computers and provides insight into the underlying structure of the system. We present different examples to demonstrate the advantages of MPO formulations and show that they are more efficient than conventional techniques if the bond dimensions of the wave function representation can be kept small throughout the simulation.
Y1 - 2022
ER -
TY - CHAP
A1 - Dey, Santanu Sabush
A1 - Pokutta, Sebastian
T1 - Design and verify: a new scheme for generating cutting-planes
T2 - Proceedings of IPCO, Lecture Notes in Computer Science
Y1 - 2011
UR - http://www.optimization-online.org/DB_HTML/2011/04/3002.html
N1 - Additional Note: DOI: 10.1007/978-3-642-20807-2_12
VL - 6655
SP - 143
EP - 155
ER -
TY - JOUR
A1 - Dey, Santanu Sabush
A1 - Pokutta, Sebastian
T1 - Design and verify: a new scheme for generating cutting-planes
JF - Mathematical Programming A
Y1 - 2014
UR - http://www.optimization-online.org/DB_HTML/2011/04/3002.html
N1 - Additional Note: DOI: 10.1007/978-3-642-20807-2_12
VL - 145
SP - 199
EP - 222
ER -
TY - JOUR
A1 - Bienstock, Daniel
A1 - Muñoz, Gonzalo
A1 - Pokutta, Sebastian
T1 - Principled Deep Neural Network Training Through Linear Programming
JF - Discrete Optimization
N2 - Deep learning has received much attention lately due to the impressive empirical performance achieved by training algorithms. Consequently, a need for a better theoretical understanding of these problems has become more evident, and multiple works in recent years have focused on this task. In this work, using a unified framework, we show that there exists a polyhedron that simultaneously encodes, in its facial structure, all possible deep neural network training problems that can arise from a given architecture, activation functions, loss function, and sample size. Notably, the size of the polyhedral representation depends only linearly on the sample size, and a better dependency on several other network parameters is unlikely. Using this general result, we compute the size of the polyhedral encoding for commonly used neural network architectures. Our results provide a new perspective on training problems through the lens of polyhedral theory and reveal strong structure arising from these problems.
Y1 - 2023
U6 - https://doi.org/10.1016/j.disopt.2023.100795
VL - 49
ER -
TY - JOUR
A1 - Aigner, Kevin-Martin
A1 - Bärmann, Andreas
A1 - Braun, Kristin
A1 - Liers, Frauke
A1 - Pokutta, Sebastian
A1 - Schneider, Oskar
A1 - Sharma, Kartikey
A1 - Tschuppik, Sebastian
T1 - Data-driven Distributionally Robust Optimization over Time
JF - INFORMS Journal on Optimization
N2 - Stochastic optimization (SO) is a classical approach for optimization under uncertainty that typically requires knowledge about the probability distribution of uncertain parameters. Because the latter is often unknown, distributionally robust optimization (DRO) provides a strong alternative that determines the best guaranteed solution over a set of distributions (ambiguity set). In this work, we present an approach for DRO over time that uses online learning and scenario observations arriving as a data stream to learn more about the uncertainty. Our robust solutions adapt over time and reduce the cost of protection with shrinking ambiguity. For various kinds of ambiguity sets, the robust solutions converge to the SO solution. Our algorithm achieves the optimization and learning goals without solving the DRO problem exactly at any step. We also provide a regret bound for the quality of the online strategy that converges at a rate of O(log T / √T), where T is the number of iterations. Furthermore, we illustrate the effectiveness of our procedure by numerical experiments on mixed-integer optimization instances from popular benchmark libraries and give practical examples stemming from telecommunications and routing. Our algorithm is able to solve the DRO over time problem significantly faster than standard reformulations.
Y1 - 2023
U6 - https://doi.org/10.1287/ijoo.2023.0091
VL - 5
IS - 4
SP - 376
EP - 394
ER -
TY - CHAP
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
A1 - Besançon, Mathieu
T1 - Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions
T2 - Thirty-fifth Conference on Neural Information Processing Systems, NeurIPS 2021
N2 - Generalized self-concordance is a key property present in the objective function of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step size strategy γ_t = 2/(t + 2), obtaining an O(1/t) convergence rate for this class of functions in terms of primal gap and Frank-Wolfe gap, where t is the iteration count. This avoids the use of second-order information or the need to estimate local smoothness parameters required in previous work. We also show improved convergence rates for various common cases, e.g., when the feasible region under consideration is uniformly convex or polyhedral.
Y1 - 2021
ER -
TY - JOUR
A1 - Besançon, Mathieu
A1 - Carderera, Alejandro
A1 - Pokutta, Sebastian
T1 - FrankWolfe.jl: a high-performance and flexible toolbox for Frank-Wolfe algorithms and Conditional Gradients
JF - INFORMS Journal on Computing
N2 - We present FrankWolfe.jl, an open-source implementation of several popular Frank–Wolfe and conditional gradients variants for first-order constrained optimization. The package is designed with flexibility and high performance in mind, allowing for easy extension and relying on few assumptions regarding the user-provided functions. It supports Julia’s unique multiple dispatch feature, and it interfaces smoothly with generic linear optimization formulations using MathOptInterface.jl.
Y1 - 2022
U6 - https://doi.org/10.1287/ijoc.2022.1191
VL - 34
IS - 5
SP - 2383
EP - 2865
ER -
TY - CHAP
A1 - MacDonald, Jan
A1 - Besançon, Mathieu
A1 - Pokutta, Sebastian
T1 - Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings
T2 - Proceedings of the International Conference on Machine Learning
Y1 - 2022
ER -
TY - JOUR
A1 - Designolle, Sébastien
A1 - Besançon, Mathieu
A1 - Iommazzo, Gabriele
A1 - Knebel, Sebastian
A1 - Gelß, Patrick
A1 - Pokutta, Sebastian
T1 - Improved Local Models and New Bell Inequalities Via Frank-Wolfe Algorithms
JF - Physical Review Research
N2 - In Bell scenarios with two outcomes per party, we algorithmically consider the two sides of the membership problem for the local polytope: constructing local models and deriving separating hyperplanes, that is, Bell inequalities. We take advantage of the recent developments in so-called Frank-Wolfe algorithms to significantly increase the convergence rate of existing methods. First, we study the threshold value for the nonlocality of two-qubit Werner states under projective measurements. Here, we improve on both the upper and lower bounds present in the literature. Importantly, our bounds are entirely analytical; moreover, they yield refined bounds on the value of the Grothendieck constant of order three: 1.4367 ≤ K_G(3) ≤ 1.4546. Second, we demonstrate the efficiency of our approach in multipartite Bell scenarios, and present local models for all projective measurements with visibilities noticeably higher than the entanglement threshold. We make our entire code accessible as a Julia library called BellPolytopes.jl.
Y1 - 2023
U6 - https://doi.org/10.1103/PhysRevResearch.5.043059
VL - 5
SP - 043059
ER -
TY - CHAP
A1 - Wirth, Elias
A1 - Kerdreux, Thomas
A1 - Pokutta, Sebastian
T1 - Acceleration of Frank-Wolfe Algorithms with Open Loop Step-sizes
T2 - Proceedings of International Conference on Artificial Intelligence and Statistics
Y1 - 2023
ER -