Several learning problems involve solving min-max problems, e.g., empirical distributionally robust learning [Namkoong and Duchi, 2016, Curi et al., 2020] or learning with non-standard aggregated losses [Shalev-Shwartz and Wexler, 2016, Fan et al., 2017]. More specifically, these problems are convex-linear problems
where the minimization is carried out over the model parameters w ∈ W and the maximization over the
empirical distribution p ∈ K of the training set indexes, where K is the simplex or a subset of it. To design
efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm.
We argue that the efficiency of such approaches critically depends on the structure of K and propose two
properties of K that facilitate designing efficient algorithms. We focus on a specific family of sets S_{n,k}
encompassing various learning applications and provide high-probability convergence guarantees to the
minimax values.
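To make the convex-linear structure concrete, here is a minimal sketch, not the online-vs-bandit scheme analyzed above: the w-player runs gradient descent on the p-weighted loss (here W = R^d, so no projection is needed), while the p-player runs full-information exponentiated-gradient ascent over the full simplex instead of a combinatorial bandit over S_{n,k}; the data, step sizes, and squared losses are purely illustrative.

# Minimal sketch (illustrative data and step sizes, not the paper's algorithm) of the
# convex-linear saddle point  min_{w in W} max_{p in K} sum_i p_i * loss_i(w).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

w = np.zeros(d)                  # model parameters (minimizing player)
p = np.full(n, 1.0 / n)          # distribution over training indexes (maximizing player)
eta_w, eta_p = 0.05, 0.5

for t in range(500):
    residuals = X @ w - y
    losses = 0.5 * residuals ** 2             # loss_i(w), i = 1, ..., n
    # w-player: gradient step on the p-weighted loss (convex in w)
    grad_w = X.T @ (p * residuals)
    w -= eta_w * grad_w
    # p-player: exponentiated-gradient ascent on the linear payoff p . losses
    p *= np.exp(eta_p * losses)
    p /= p.sum()                               # renormalize onto the simplex

print("final weighted loss:", float(p @ (0.5 * (X @ w - y) ** 2)))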
It has recently been shown that ISTA, an unaccelerated optimization method, presents sparse updates for the ℓ1-regularized undirected personalized PageRank problem (Fountoulakis et al., 2019), leading to cheap iteration complexity and providing the same guarantees as the approximate personalized PageRank algorithm (APPR) (Andersen et al., 2006). In this work, we design an accelerated optimization algorithm for this problem that also performs sparse updates, providing an affirmative answer to the COLT 2022 open question of Fountoulakis and Yang (2022). Acceleration provides a reduced dependence on the condition number, while the dependence on the sparsity in our updates differs from the ISTA approach. Further, we design another algorithm by using conjugate directions to achieve an exact solution while exploiting sparsity. Both algorithms lead to faster
convergence for certain parameter regimes. Our findings apply beyond PageRank and work for any quadratic objective whose Hessian is a positive-definite M-matrix.
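As a point of reference for this setting (and not the accelerated or conjugate-direction algorithms contributed here), the following sketch runs plain, dense ISTA, i.e., proximal gradient with soft-thresholding, on an ℓ1-regularized quadratic whose Hessian is a symmetric positive-definite M-matrix; the graph, seed vector, and regularization strength are invented for illustration, and no sparse-update bookkeeping is performed.

# Minimal ISTA sketch on  min_x 0.5*x'Qx - b'x + rho*||x||_1  with Q an SPD M-matrix.
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(1)
n = 50
A = np.abs(rng.normal(size=(n, n)))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
Q = np.diag(A.sum(axis=1) + 1.0) - A           # diagonally dominant => SPD M-matrix
b = np.zeros(n); b[0] = 1.0                    # single "seed" coordinate (illustrative)
rho = 1e-3
L = np.linalg.eigvalsh(Q).max()                # Lipschitz constant of the smooth part

x = np.zeros(n)
for t in range(2000):
    x = soft_threshold(x - (Q @ x - b) / L, rho / L)   # proximal gradient step

print("nonzeros in the solution:", int(np.count_nonzero(x)))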
The vanishing ideal of a set of points X is the set of polynomials that evaluate to 0 over all points x in X and admits an efficient representation by a finite set of polynomials called generators. To accommodate noise in the data set, we introduce the Conditional Gradients Approximately Vanishing Ideal algorithm (CGAVI) for the construction of the set of generators of the approximately vanishing ideal. The constructed set of generators captures polynomial structures in data and gives rise to a feature map that can, for example, be used in combination with a linear classifier for supervised learning. In CGAVI, we construct the set of generators by solving specific instances of (constrained) convex optimization problems with the Pairwise Frank-Wolfe algorithm (PFW). Among other things, the constructed generators inherit the LASSO generalization bound and vanish not only on the training data but also on out-of-sample data. Moreover, CGAVI admits a compact representation of the approximately vanishing ideal by constructing a small number of generators with sparse coefficient vectors.
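As an illustration of how such generators are used downstream (this is not CGAVI itself; the degree-2 monomial basis, the helper names, and the circle generator are invented for the example), each generator's coefficient vector over a monomial basis defines a polynomial, and the absolute evaluations |g(x)| serve as features that a linear classifier can consume.

# Minimal sketch: generators as coefficient vectors over a fixed monomial basis,
# inducing a feature map x -> |g(x)| for each generator g (all names hypothetical).
import numpy as np

def monomials_deg2(X):
    """Degree-2 monomial basis [1, x1, x2, x1^2, x1*x2, x2^2] for 2-D inputs."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([np.ones(len(X)), x1, x2, x1 ** 2, x1 * x2, x2 ** 2], axis=1)

def feature_map(X, generators):
    """|g(x)| for each generator g; points the generator (approximately) vanishes on
    yield near-zero features, other points typically do not."""
    return np.abs(monomials_deg2(X) @ generators.T)

# Illustrative generator: g(x) = x1^2 + x2^2 - 1 vanishes on the unit circle.
generators = np.array([[-1.0, 0.0, 0.0, 1.0, 0.0, 1.0]])

X = np.array([[1.0, 0.0], [0.0, -1.0], [2.0, 0.0]])
print(feature_map(X, generators))   # ~0 for the circle points, large for the outlier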