Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization

  • Abstract: In this paper, we present a limited-memory common-directions method for smooth optimization that interpolates between first- and second-order methods. At each iteration, a subspace of limited dimension is constructed from first-order information gathered in previous iterations, and an efficient Newton method is deployed to find an approximate minimizer within this subspace. With a properly selected subspace of dimension as small as two, the proposed algorithm achieves the optimal convergence rates for first-order methods while remaining a descent method, and it also converges quickly on nonconvex problems. Because the major operations of our method are dense matrix-matrix operations, the method can be efficiently parallelized in multicore environments even for sparse problems. By judiciously reusing historical information, our method is also communication-efficient in multi-machine distributed optimization, as the Newton steps can be computed with little communication. A numerical study shows that our method has superior empirical performance on real-world large-scale machine learning problems.
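To make the subspace Newton step concrete, below is a minimal Python sketch of the idea for a quadratic objective f(x) = 0.5 xᵀAx − bᵀx, whose Hessian is the constant matrix A. It uses a two-dimensional subspace spanned by the current gradient and the previous update step; on a quadratic, minimizing exactly over this subspace recovers linear conjugate gradient. The function name and the specific memory choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def common_directions_quadratic(A, b, x0, iters=100, tol=1e-8):
    """Subspace Newton steps over span{current gradient, previous step}.

    A sketch of the common-directions idea for f(x) = 0.5 x'Ax - b'x;
    names and memory choice are illustrative, not the paper's code.
    """
    x = x0.astype(float).copy()
    d_prev = None
    for _ in range(iters):
        g = A @ x - b                      # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            break
        cols = [g] if d_prev is None else [g, d_prev]
        P = np.column_stack(cols)          # common-directions matrix
        # Newton step restricted to the subspace: solve (P'AP) t = -P'g.
        # lstsq tolerates the near-singular P'AP that arises at convergence.
        t, *_ = np.linalg.lstsq(P.T @ (A @ P), -(P.T @ g), rcond=None)
        d = P @ t                          # combined update direction
        x = x + d                          # exact minimizer over x + span(P)
        d_prev = d
    return x

# Usage on a small synthetic positive-definite system:
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M.T @ M + np.eye(30)                   # symmetric positive definite
b = rng.standard_normal(30)
x = common_directions_quadratic(A, b, np.zeros(30))
print(np.linalg.norm(A @ x - b))           # near zero
```

For general smooth objectives, the small linear system would use Pᵀ∇²f(x)P in place of PᵀAP, with a line search along d to preserve the descent property the abstract describes.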

Metadata
Authors: Ching-pei Lee, Po-Wei Wang, Chih-Jen Lin
DOI: https://doi.org/10.1007/s12532-022-00219-z
ISSN: 1867-2949
Parent Title (English): Mathematical Programming Computation
Publisher: Springer Science and Business Media LLC
Document Type: Article
Language: English
Year of Completion: 2022
Volume: 14
Issue: 3
Number of Pages: 49
First Page: 543
Last Page: 591