<?xml version="1.0" encoding="utf-8"?>
<export-example>
  <doc>
    <id>8397</id>
    <completedYear/>
    <publishedYear>2021</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst/>
    <pageLast/>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume/>
    <type>article</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Efficient Online-Bandit Strategies for Minimax Learning Problems</title>
    <abstract language="eng">Several learning problems involve solving min-max problems, e.g., empirical distributional robust learning&#13;
[Namkoong and Duchi, 2016, Curi et al., 2020] or learning with non-standard aggregated losses [Shalev-&#13;
Shwartz and Wexler, 2016, Fan et al., 2017]. More specifically, these problems are convex-linear problems&#13;
where the minimization is carried out over the model parameters w ∈ W and the maximization over the&#13;
empirical distribution p ∈ K of the training set indexes, where K is the simplex or a subset of it. To design&#13;
efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm.&#13;
We argue that the efficiency of such approaches critically depends on the structure of K and propose two&#13;
properties of K that facilitate designing efficient algorithms. We focus on a specific family of sets Sn,k&#13;
encompassing various learning applications and provide high-probability convergence guarantees to the&#13;
minimax values.</abstract>
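    <!-- A minimal, hypothetical Python sketch of the convex-linear min-max game
         described in the abstract above: min over w of max over p in the simplex
         of sum_i p_i * l_i(w). The toy losses l_i(w) = (w - x_i)^2, the step
         sizes, and the full-information multiplicative-weights p-player are
         assumptions standing in for the paper's combinatorial bandit algorithm.

         import numpy as np

         rng = np.random.default_rng(0)
         x = rng.normal(size=20)            # toy data; l_i(w) = (w - x_i)^2
         T, eta_w, eta_p = 500, 0.05, 0.1   # horizon and step sizes (assumed)

         w, w_avg = 0.0, 0.0
         p = np.ones(len(x)) / len(x)       # start from the uniform distribution
         for t in range(1, T + 1):
             losses = (w - x) ** 2                   # l_i(w) for every index i
             w = w - eta_w * np.dot(p, 2 * (w - x))  # gradient step on sum_i p_i l_i(w)
             p = p * np.exp(eta_p * losses)          # multiplicative-weights ascent on p
             p = p / p.sum()                         # renormalize onto the simplex
             w_avg += (w - w_avg) / t                # average iterate approaches minimax w

         print("approximate minimax w:", w_avg)
    -->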
    <enrichment key="PeerReviewed">no</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <author>Christophe Roux</author>
    <submitter>Christophe Roux</submitter>
    <author>Sebastian Pokutta</author>
    <author>Elias Wirth</author>
    <author>Thomas Kerdreux</author>
    <collection role="projects" number="MODAL-SynLab">MODAL-SynLab</collection>
    <collection role="projects" number="MODAL-Gesamt">MODAL-Gesamt</collection>
    <collection role="persons" number="pokutta">Pokutta, Sebastian</collection>
    <collection role="institutes" number="ais2t">AI in Society, Science, and Technology</collection>
    <collection role="persons" number="Roux">Roux, Christophe</collection>
    <collection role="projects" number="Math+AA3-7">Math+AA3-7</collection>
  </doc>
  <doc>
    <id>8848</id>
    <completedYear/>
    <publishedYear>2022</publishedYear>
    <thesisYearAccepted/>
    <language>eng</language>
    <pageFirst>2191</pageFirst>
    <pageLast>2209</pageLast>
    <pageNumber/>
    <edition/>
    <issue/>
    <volume>151</volume>
    <type>conferenceobject</type>
    <publisherName/>
    <publisherPlace/>
    <creatingCorporation/>
    <contributingCorporation/>
    <belongsToBibliography>0</belongsToBibliography>
    <completedDate>--</completedDate>
    <publishedDate>--</publishedDate>
    <thesisDateAccepted>--</thesisDateAccepted>
    <title language="eng">Conditional Gradients for the Approximately Vanishing Ideal</title>
    <abstract language="eng">The vanishing ideal of a set of points X is the set of polynomials that evaluate to 0 over all points x in X and admits an efficient representation by a finite set of polynomials called generators. To accommodate the noise in the data set, we introduce the Conditional Gradients Approximately Vanishing Ideal algorithm (CGAVI) for the construction of the set of generators of the approximately vanishing ideal. The constructed set of generators captures polynomial structures in data and gives rise to a feature map that can, for example, be used in combination with a linear classifier for supervised learning. In CGAVI, we construct the set of generators by solving specific instances of (constrained) convex optimization problems with the Pairwise Frank-Wolfe algorithm (PFW). Among other things, the constructed generators inherit the LASSO generalization bound and not only vanish on the training but also on out-sample data. Moreover, CGAVI admits a compact representation of the approximately vanishing ideal by constructing few generators with sparse coefficient vectors.</abstract>
    <parentTitle language="eng">Proceedings of The 25th International Conference on Artificial Intelligence and Statistics</parentTitle>
    <identifier type="url">https://proceedings.mlr.press/v151/wirth22a.html</identifier>
    <enrichment key="PeerReviewed">yes</enrichment>
    <enrichment key="opus.source">publish</enrichment>
    <enrichment key="AcceptedDate">2022-01-18</enrichment>
    <enrichment key="Series">PMLR</enrichment>
    <author>Elias Wirth</author>
    <submitter>Christoph Spiegel</submitter>
    <author>Sebastian Pokutta</author>
    <collection role="persons" number="pokutta">Pokutta, Sebastian</collection>
    <collection role="institutes" number="Mathematical Algorithmic Intelligence">Mathematical Algorithmic Intelligence</collection>
    <collection role="institutes" number="ais2t">AI in Society, Science, and Technology</collection>
    <collection role="projects" number="Math+AA3-7">Math+AA3-7</collection>
  </doc>
</export-example>
