Projection-Free Adaptive Gradients for Large-Scale Optimization

Abstract: The complexity in large-scale optimization can lie in both handling the objective function and handling the constraint set. In this respect, stochastic Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens, by querying only approximate first-order information from the objective and by maintaining feasibility of the iterates without using projections. In this paper, we improve the quality of their first-order information by blending in adaptive gradients. We derive convergence rates and demonstrate the computational advantage of our method over the state-of-the-art stochastic Frank-Wolfe algorithms on both convex and nonconvex objectives. The experiments further show that our method can improve the performance of adaptive gradient algorithms for constrained optimization.
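To make the abstract's idea concrete, the sketch below combines a generic stochastic Frank-Wolfe loop with an AdaGrad-style diagonal rescaling of the gradient estimate. It is a minimal illustration only, not the authors' exact algorithm from the paper: the choice of an L1-ball feasible set, the function names (`lmo_l1_ball`, `adaptive_stochastic_frank_wolfe`), the momentum averaging, and the step-size schedule are all assumptions made for the example.

```python
# Illustrative sketch: stochastic Frank-Wolfe with an AdaGrad-style diagonal
# preconditioner over an L1-ball. Not the paper's exact method; all names and
# parameter choices here are assumptions for illustration.
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle: argmin_{||v||_1 <= radius} <grad, v>."""
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -radius * np.sign(grad[i])
    return v

def adaptive_stochastic_frank_wolfe(stoch_grad, x0, n_steps=1000, radius=1.0,
                                    momentum=0.9, eps=1e-8):
    """Projection-free loop: each iterate is a convex combination of the
    previous iterate and an LMO vertex, so feasibility is maintained without
    ever computing a projection."""
    x = x0.copy()
    m = np.zeros_like(x0)   # momentum-averaged stochastic gradient estimate
    s = np.zeros_like(x0)   # accumulated squared gradients (AdaGrad-style)
    for t in range(1, n_steps + 1):
        g = stoch_grad(x)                      # approximate first-order information
        m = momentum * m + (1 - momentum) * g  # smoothed gradient estimate
        s += g ** 2
        d = m / (np.sqrt(s) + eps)             # adaptively rescaled direction
        v = lmo_l1_ball(d, radius)             # cheap linear subproblem, no projection
        gamma = 2.0 / (t + 2)                  # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v        # convex combination stays feasible
    return x
```

The point the abstract emphasizes is visible in the loop: the only interaction with the constraint set is the linear minimization oracle, so the iterates remain feasible throughout without a single projection being computed.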

Metadata
Author: Cyrille W. Combettes, Christoph Spiegel, Sebastian Pokutta
Document Type: Article
Year of first publication: 2020
ArXiv Id: http://arxiv.org/abs/2009.14114