
Local Convergence of Adaptive Gradient Descent Optimizers

  • Adaptive Moment Estimation (ADAM) is a very popular training algorithm for deep neural networks and belongs to the family of adaptive gradient descent optimizers. However, to the best of the authors' knowledge, no complete convergence analysis exists for ADAM. The contribution of this paper is a method for local convergence analysis in batch mode for a deterministic, fixed training set, which yields necessary conditions on the hyperparameters of the ADAM algorithm. Due to the local nature of the arguments, the objective function may be non-convex but must be at least twice continuously differentiable. We then apply this procedure to other adaptive gradient descent algorithms and show local convergence with hyperparameter bounds for most of them.
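
The abstract refers to ADAM in batch mode, i.e. the exact gradient of a fixed, deterministic training objective is used in every step. For reference, here is a minimal sketch of the standard ADAM update (as in Kingma & Ba) applied to such a deterministic objective; the example objective, its gradient, and the hyperparameter values are illustrative assumptions and not the specific settings analyzed in the paper.

```python
import numpy as np

def adam_batch(grad_f, w0, alpha=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, n_steps=1000):
    """Standard ADAM update in batch mode: the full, deterministic
    gradient of a fixed objective is used in every iteration."""
    w = np.asarray(w0, dtype=float)
    m = np.zeros_like(w)   # first-moment (momentum) estimate
    v = np.zeros_like(w)   # second-moment estimate
    for t in range(1, n_steps + 1):
        g = grad_f(w)                      # deterministic full-batch gradient
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)         # bias correction
        v_hat = v / (1 - beta2**t)
        w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Illustrative use on a smooth, non-convex objective (assumed example):
# f(w) = w[0]**2 + cos(w[1]),  grad f(w) = (2*w[0], -sin(w[1]))
w_star = adam_batch(lambda w: np.array([2 * w[0], -np.sin(w[1])]),
                    w0=[1.0, 2.0])
```

In this deterministic setting the iteration can be viewed as a discrete dynamical system in (w, m, v), which is the viewpoint the keywords below ("dynamical system", "fixed point") suggest.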

Metadata
Author: Sebastian Bock (ORCiD), Martin Georg Weiß (ORCiD)
URN: urn:nbn:de:bvb:898-opus4-7546
Document Type: Preprint
Language: English
Year of first Publication: 2020
Publishing Institution: Ostbayerische Technische Hochschule Regensburg
Release Date: 2020/12/09
Tag: ADAM Optimizer; Convergence; dynamical system; fixed point; momentum method
Institutes: Fakultät Informatik und Mathematik
Research focus: Information und Kommunikation