
Large-time asymptotics in deep learning

Submission Status:under review
It is by now well known that practical deep supervised learning can roughly be cast as an optimal control problem for a specific discrete-time, nonlinear dynamical system called an artificial neural network. In this work, we consider the continuous-time formulation of the deep supervised learning problem and study its behavior as the final time horizon increases, which can be interpreted as increasing the number of layers in the neural network setting. For the classical regularized empirical risk minimization problem, we show that, in long time, the optimal states approach the zero training error regime, whilst the optimal control parameters approach, on an appropriate scale, minimal-norm parameters whose corresponding states lie precisely in the zero training error regime. This result provides an alternative theoretical underpinning for the notion that neural networks learn best in the overparametrized regime, seen from the large-layer perspective. We also propose a learning problem consisting of minimizing a cost with a state tracking term, and establish the well-known turnpike property, which indicates that the solutions of the learning problem over long time intervals consist of three pieces: the first and last are transient short-time arcs, while the middle piece is a long-time arc staying exponentially close to the optimal solution of an associated static learning problem. This property in fact yields a quantitative estimate for the number of layers required to reach the zero training error regime. Both of the aforementioned asymptotic regimes are addressed in the context of continuous-time and continuous space-time neural networks, the latter taking the form of nonlinear integro-differential equations, hence covering residual neural networks with both fixed and possibly variable depths.
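As a rough illustration of the continuous-time viewpoint described above (a minimal sketch in Python, not the authors' code), the snippet below integrates a neural ODE x'(t) = tanh(W(t) x(t) + b(t)) by forward Euler; with a fixed step size, enlarging the time horizon T means adding Euler steps, which is exactly the large-time / large-depth correspondence referred to in the abstract. All dimensions, step sizes, and parameter values are illustrative assumptions.

    # Minimal sketch: a residual network as a forward-Euler discretisation of
    # the neural ODE  x'(t) = tanh(W(t) x(t) + b(t))  on [0, T].
    # Increasing T with a fixed step size dt corresponds to adding layers.
    import numpy as np

    def forward_euler_resnet(x0, weights, biases, dt):
        """Integrate x' = tanh(W(t) x + b(t)), one (W, b) pair per time step."""
        x = x0
        for W, b in zip(weights, biases):
            x = x + dt * np.tanh(W @ x + b)   # one "layer" = one Euler step
        return x

    # Hypothetical sizes: data dimension d, horizon T, step dt -> T/dt layers.
    rng = np.random.default_rng(0)
    d, T, dt = 4, 10.0, 0.1
    n_layers = int(T / dt)
    weights = [0.1 * rng.standard_normal((d, d)) for _ in range(n_layers)]
    biases = [np.zeros(d) for _ in range(n_layers)]

    x0 = rng.standard_normal(d)
    xT = forward_euler_resnet(x0, weights, biases, dt)
    print(f"{n_layers} layers (T={T}, dt={dt}):", xT)

In the regularized empirical risk minimization setting discussed in the abstract, one would additionally train the per-step parameters while penalizing a discretized parameter norm, e.g. a term proportional to dt * sum(||W_k||^2 + ||b_k||^2), so that as T grows the parameters can become small on the appropriate scale while the terminal states still reach the zero training error regime.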


Metadata
Author:Carlos Esteve, Borjan Geshkovski, Dario Pighin, Enrique Zuazua
Document Type:Article
Language:English
Date of Publication (online):2020/08/06
Date of first Publication:2020/10/27
Release Date:2020/10/27
Tag:Neural ODEs; Residual Neural Networks; Supervised Learning; deep learning; optimal control
Subprojects:C03
Licence:Creative Commons - CC BY - Attribution 4.0 International