Convergence and Trade-Offs in Riemannian Gradient Descent and Riemannian Proximal Point

Abstract: In this work, we analyze two of the most fundamental algorithms in geodesically convex optimization: Riemannian gradient descent and (possibly inexact) Riemannian proximal point. We quantify their rates of convergence and produce different variants with several trade-offs. Crucially, we show the iterates naturally stay in a ball around an optimizer, of radius depending on the initial distance and, in some cases, on the curvature. Previous works simply assumed bounded iterates, resulting in rates that were not fully quantified. We also provide an implementable inexact proximal point algorithm and prove several new useful properties of Riemannian proximal methods: they work when positive curvature is present, the proximal operator does not move points away from any optimizer, and we quantify the smoothness of its induced Moreau envelope. Further, we explore beyond our theory with empirical tests.
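For reference, the objects named in the abstract can be written in their standard forms below; this is a generic sketch using common definitions from the geodesically convex optimization literature, and the exact step sizes, inexactness criteria, and variants analyzed in the paper may differ.

% Riemannian gradient descent on a manifold M, with exponential map exp_x,
% Riemannian gradient grad f(x), and step size eta > 0:
\[
  x_{k+1} = \exp_{x_k}\!\bigl(-\eta\, \operatorname{grad} f(x_k)\bigr).
\]
% Riemannian proximal point with parameter lambda > 0 and geodesic distance d:
\[
  x_{k+1} \in \operatorname*{arg\,min}_{y \in \mathcal{M}} \Bigl\{ f(y) + \tfrac{1}{2\lambda}\, d(x_k, y)^2 \Bigr\},
\]
% and the induced Moreau envelope, whose smoothness the paper quantifies:
\[
  f_{\lambda}(x) = \min_{y \in \mathcal{M}} \Bigl\{ f(y) + \tfrac{1}{2\lambda}\, d(x, y)^2 \Bigr\}.
\]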

Metadata
Authors: David Martínez-Rubio, Christophe Roux, Sebastian Pokutta
Document Type: In Proceedings
Parent Title (English): Proceedings of the 41st International Conference on Machine Learning
Volume: 235
First Page: 34920
Last Page: 34948
Series: PMLR
Year of first publication: 2024
URL: https://raw.githubusercontent.com/mlresearch/v235/main/assets/marti-nez-rubio24a/marti-nez-rubio24a.pdf