TY - JOUR
A1 - Roux, Christophe
A1 - Pokutta, Sebastian
A1 - Wirth, Elias
A1 - Kerdreux, Thomas
T1 - Efficient Online-Bandit Strategies for Minimax Learning Problems
N2 - Several learning problems involve solving min-max problems, e.g., empirical distributional robust learning [Namkoong and Duchi, 2016, Curi et al., 2020] or learning with non-standard aggregated losses [Shalev-Shwartz and Wexler, 2016, Fan et al., 2017]. More specifically, these problems are convex-linear problems where the minimization is carried out over the model parameters w ∈ W and the maximization over the empirical distribution p ∈ K of the training set indexes, where K is the simplex or a subset of it. To design efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm. We argue that the efficiency of such approaches critically depends on the structure of K and propose two properties of K that facilitate designing efficient algorithms. We focus on a specific family of sets Sn,k encompassing various learning applications and provide high-probability convergence guarantees to the minimax values.
Y1 - 2021
ER -
TY - JOUR
A1 - Kerdreux, Thomas
A1 - Roux, Christophe
A1 - d'Aspremont, Alexandre
A1 - Pokutta, Sebastian
T1 - Linear Bandits on Uniformly Convex Sets
JF - Journal of Machine Learning Research
N2 - Linear bandit algorithms yield Õ(n√T) pseudo-regret bounds on compact convex action sets K ⊂ Rn, and two types of structural assumptions lead to better pseudo-regret bounds. When K is the simplex or an ℓp ball with p ∈ ]1,2], there exist bandit algorithms with Õ(√n√T) pseudo-regret bounds. Here, we derive bandit algorithms for some strongly convex sets beyond ℓp balls that enjoy pseudo-regret bounds of Õ(√n√T), which answers an open question from [BCB12, §5.5.]. Interestingly, when the action set is uniformly convex but not necessarily strongly convex, we obtain pseudo-regret bounds with a dimension dependency smaller than O(√n). However, this comes at the expense of asymptotic rates in T varying between O(√T) and O(T).
Y1 - 2021
UR - https://www.jmlr.org/papers/v22/21-0277.html
VL - 22
IS - 284
SP - 1
EP - 23
ER -
TY - CHAP
A1 - Martínez-Rubio, David
A1 - Roux, Christophe
A1 - Criscitiello, Christopher
A1 - Pokutta, Sebastian
T1 - Accelerated Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties
T2 - Proceedings of Optimization for Machine Learning (NeurIPS Workshop OPT 2023)
Y1 - 2023
ER -
TY - CHAP
A1 - Martínez-Rubio, David
A1 - Roux, Christophe
A1 - Pokutta, Sebastian
T1 - Convergence and Trade-Offs in Riemannian Gradient Descent and Riemannian Proximal Point
T2 - Proceedings of the International Conference on Machine Learning
Y1 - 2024
ER -