We analyze Neural Ordinary Differential Equations (NODEs) from a control-theoretical perspective to address some of the main properties and paradigms of Deep Learning (DL), in particular, data classification and universal approximation. These objectives are tackled and achieved from the perspective of the simultaneous control of systems of NODEs. For instance, in the context of classification, each item to be classified corresponds to a different initial datum for the control problem of the NODE, and all of them have to be steered, by the same common control, to the location (a subdomain of Euclidean space) associated with their label. Our proofs are genuinely nonlinear and constructive, allowing us to estimate the complexity of the control strategies we develop. The nonlinear nature of the activation functions governing the dynamics of the NODEs under consideration plays a key role in our proofs, since it allows deforming half of the phase space while the other half remains invariant, a property that classical models in mechanics do not fulfill. This very property allows us to build elementary controls inducing specific dynamics and transformations whose concatenation, along with properly chosen hyperplanes, achieves our goals in finitely many steps. The nonlinearity of the dynamics is only assumed to be Lipschitz continuous; in particular, our results also apply to the ReLU activation function. We also present the counterparts in the context of the control of neural transport equations, establishing a link between optimal transport and deep neural networks.
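To fix ideas, a minimal sketch of the class of models alluded to above (the precise parametrization used in the text may differ) is the NODE [katex]\dot{x}(t) = W(t)\,\sigma\big(A(t)x(t) + b(t)\big)[/katex], where the activation function [katex]\sigma[/katex] acts componentwise and the weights [katex]W, A, b[/katex] play the role of controls. When [katex]\sigma[/katex] is the ReLU, [katex]\sigma(s) = \max\{s,0\}[/katex], and the controls are chosen of rank one, say [katex]\dot{x}(t) = w(t)\,\sigma(\langle a(t), x(t)\rangle + b(t))[/katex], the vector field vanishes on the half-space [katex]\{x : \langle a(t), x\rangle + b(t) \le 0\}[/katex], so that this half-space stays put while the other one is deformed; this is the elementary invariance mechanism mentioned above.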
In these lecture notes, we address the problem of the large-time asymptotic behavior of the solutions to scalar convection-diffusion equations set in [katex]\mathbb{R}^N[/katex]. The large-time asymptotic behavior of the solutions to many convection-diffusion equations is strongly linked with the behavior of the initial data at infinity. In fact, when the initial datum is integrable and of mass [katex]M[/katex], the solutions to the equations under consideration often behave like the associated self-similar profile of mass [katex]M[/katex], thus emphasizing the role of scaling variables in these scenarios. However, these equations can also exhibit other asymptotic regimes, including weakly nonlinear, linear, or strongly nonlinear behavior, depending on the form of the convective term. We give an exhaustive presentation of several results and techniques, clearly distinguishing the role of the spatial dimension and of the form of the nonlinear convective term.
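A prototypical example of the equations we have in mind (the family treated in the notes may be broader) is [katex]u_t - \Delta u + \partial_{x_1}\big(|u|^{q-1}u\big) = 0[/katex] in [katex]\mathbb{R}^N \times (0,\infty)[/katex], [katex]q > 1[/katex], with initial datum [katex]u_0 \in L^1(\mathbb{R}^N)[/katex] of mass [katex]M = \int_{\mathbb{R}^N} u_0\,dx[/katex]. The scaling variables referred to above correspond (up to the usual shift [katex]t \mapsto 1+t[/katex]) to the change of unknown [katex]v(y,s) = t^{N/2} u(x,t)[/katex], [katex]y = x/\sqrt{t}[/katex], [katex]s = \log t[/katex], under which self-similar profiles become steady states of the rescaled equation.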
Control under constraints for multi-dimensional reaction-diffusion monostable and bistable equations
(2020)
Dynamic phenomena in the social and biological sciences can often be modeled by reaction-diffusion equations. In applications, their control frequently plays an important role, for instance in avoiding population extinction or the propagation of infectious diseases, or in enhancing multicultural features. When addressing these issues from a mathematical viewpoint, one of the main challenges is that, because of the intrinsic nature of the models under consideration, the solution, typically a proportion or a density function, needs to preserve given lower and upper bounds (taking values in [katex][0,1][/katex]).
Controlling the system to the desired final configuration then becomes complex, and sometimes even impossible. In the present work, we analyze the controllability to constant steady states of spatially homogeneous semilinear heat equations, with constraints on the state, by means of boundary controls, which is indeed a natural way of acting on the system in the present context. The nonlinearities considered are among the most frequent: monostable and bistable ones. We prove that controlling the system to a constant steady state may become impossible when the diffusivity is too small (or when the domain is large), due to the existence of barrier functions. When such an obstruction does not arise, we build sophisticated control strategies combining the dissipativity of the system, the existence of traveling waves, and the connectivity of the set of steady states. This connectivity allows building paths that the controlled trajectories can follow, over a long time, with small oscillations, preserving the natural constraints of the system.
This kind of strategy was successfully implemented in one space dimension, where phase-plane analysis techniques made it possible to decode the nature of the set of steady states. These techniques fail in the present multi-dimensional setting. We employ a fictitious domain technique, extending the system to a larger ball and building paths of radially symmetric solutions that can then be restricted to the original domain. The results are illustrated by numerical simulations of these models, which find several applications, such as the extinction of minority languages or the survival of rare species in sufficiently large reserved areas.
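For concreteness, a model case covered by the above discussion (the paper's setting may be more general) is [katex]u_t - d\,\Delta u = f(u)[/katex] in [katex]\Omega \times (0,T)[/katex], with boundary control [katex]u = a(t)[/katex] on [katex]\partial\Omega[/katex], the state constraint [katex]0 \le u \le 1[/katex] and the control constraint [katex]0 \le a(t) \le 1[/katex], where [katex]f(u) = u(1-u)[/katex] in the monostable case and [katex]f(u) = u(u-\theta)(1-u)[/katex] with [katex]0 < \theta < 1[/katex] in the bistable one; the comparison principle then keeps the controlled state within the admissible range.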
We model, simulate and control the guiding problem for a herd of evaders under the action of repulsive drivers. The problem is formulated in an optimal control framework, where the drivers (controls) aim to guide the evaders (states) to a desired region of the Euclidean space.
The numerical simulation of such models quickly becomes infeasible for a large number of interacting agents. To reduce the computational cost, we use the Random Batch Method (RBM), which provides a computationally feasible approximation of the dynamics. At each time step, the RBM randomly divides the set of particles into small subsets (batches), considering only the interactions inside each batch. Thanks to its averaging effect, the RBM approximation converges to the exact dynamics as the time discretization gets finer. We propose an algorithm that computes the optimal control of a fixed RBM-approximated trajectory using a classical gradient descent. The resulting control is not optimal for the original complete system, but rather for the reduced RBM model. We then adopt a Model Predictive Control (MPC) strategy to handle the error in the dynamics. While the system evolves in time, the MPC strategy consists in periodically updating the state and computing the optimal control over a long time horizon, which is then implemented recursively over a shorter time horizon. This leads to a semi-feedback control strategy. Through numerical experiments we show that the combination of RBM and MPC leads to a significant reduction of the computational cost, while preserving the capacity to control the overall dynamics.
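The batching mechanism can be sketched in a few lines. The snippet below is only an illustrative implementation of one RBM time step for a generic first-order interacting-particle system; the function names, the explicit Euler discretization and the pairwise interaction `kernel` are our own assumptions, not the code used in the paper.

```python
import numpy as np

def rbm_step(x, dt, batch_size, kernel, rng):
    """One forward-Euler step of the Random Batch Method (RBM).

    x          : (N, d) array of agent positions
    dt         : time step
    batch_size : number of agents per batch
    kernel     : kernel(xi, xj) -> interaction force exerted on xi by xj
    rng        : numpy random Generator, e.g. np.random.default_rng()
    """
    N, d = x.shape
    perm = rng.permutation(N)                   # random shuffle of the agents
    x_new = x.copy()
    for start in range(0, N, batch_size):
        batch = perm[start:start + batch_size]  # one random batch
        for i in batch:
            force = np.zeros(d)
            for j in batch:                     # interactions inside the batch only
                if j != i:
                    force += kernel(x[i], x[j])
            x_new[i] = x[i] + dt * force / max(len(batch) - 1, 1)
    return x_new
```

In the guidance problem, the repulsive action of the drivers would enter as an additional control-dependent term in the update, and the optimization loop (gradient descent for a fixed RBM trajectory, wrapped in the MPC re-planning described above) repeatedly calls such a step.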
We address the application of stochastic optimization methods to the simultaneous control of parameter-dependent systems. In particular, we focus on the classical Stochastic Gradient Descent (SGD) approach of Robbins and Monro, and on the recently developed Continuous Stochastic Gradient (CSG) algorithm. We consider the problem of computing simultaneous controls through the minimization of a cost functional defined as the superposition of individual costs for each realization of the system. We compare the performance of these stochastic approaches, in terms of computational complexity, with that of the more classical Gradient Descent (GD) and Conjugate Gradient (CG) algorithms, and we discuss the advantages and disadvantages of each methodology. In agreement with well-established results in the machine learning context, we show how the SGD and CSG algorithms can significantly reduce the computational burden when treating control problems depending on a large number of parameters. This is corroborated by numerical experiments.
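To illustrate how SGD exploits the parametric structure, here is a minimal sketch of the Robbins-Monro iteration applied to a cost of the form [katex]J(u) = \frac{1}{K}\sum_{k=1}^{K} J_k(u)[/katex]; the interface (a user-supplied gradient `grad_Jk` of each individual cost) and the [katex]1/n[/katex] step-size rule are illustrative assumptions, not the exact setup of the paper.

```python
import numpy as np

def sgd_simultaneous_control(grad_Jk, u0, K, steps, lr0, rng):
    """Minimal SGD sketch for min_u (1/K) * sum_k J_k(u).

    grad_Jk : callable (u, k) -> gradient of the k-th individual cost at u
    u0      : initial control (numpy array)
    K       : number of realizations of the parameter-dependent system
    steps   : number of SGD iterations
    lr0     : initial learning rate (decreasing, Robbins-Monro style)
    rng     : numpy random Generator
    """
    u = u0.copy()
    for n in range(1, steps + 1):
        k = rng.integers(K)                 # sample one realization of the system
        u = u - (lr0 / n) * grad_Jk(u, k)   # gradient step for that realization only
    return u
```

At each iteration only one realization (and hence one state-adjoint solve) is needed, which is the source of the computational savings discussed above; CSG, roughly speaking, additionally reuses suitably reweighted gradients from previous iterations.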
In this work, we analyze the consequences that the so-called turnpike property has on the long-time behavior of the value function corresponding to an optimal control problem. As a by-product, we obtain the long-time behavior of the solution to the associated Hamilton-Jacobi-Bellman equation.
In order to carry out our study, we use the setting of a finite-dimensional linear-quadratic optimal control problem, for which the turnpike property is well understood. We prove that, when the time horizon [katex]T[/katex] tends to infinity, the value function converges to a travelling-front-like solution of the form [katex]W(x) + cT + \lambda[/katex]. In addition, we provide a control interpretation of each of these three terms in the spirit of turnpike theory. Finally, we compare this asymptotic decomposition with the existing results on the long-time behavior of Hamilton-Jacobi equations. We stress that in our case the Hamiltonian is not coercive in the momentum variable, a case rarely considered in the classical literature on Hamilton-Jacobi equations.
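A model problem to keep in mind (stated here only schematically) is the minimization of [katex]J_T(u) = \frac{1}{2}\int_0^T \big(|y(t)-z|^2 + |u(t)|^2\big)\,dt[/katex] subject to [katex]\dot{y} = Ay + Bu[/katex], [katex]y(0) = x[/katex], with value function [katex]V_T(x)[/katex]. The asymptotic decomposition then reads [katex]V_T(x) = W(x) + cT + \lambda + o(1)[/katex] as [katex]T \to \infty[/katex], where, roughly, [katex]c[/katex] is the optimal cost per unit of time of the associated static (turnpike) problem, [katex]W(x)[/katex] accounts for the initial transient steering the trajectory from [katex]x[/katex] to the turnpike, and [katex]\lambda[/katex] for the final arc.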
We introduce and study the turnpike property for time-varying shapes, from the viewpoint of optimal control. We focus here on second-order linear parabolic equations where the shape acts as a source term and we seek the optimal time-varying shape that minimizes a quadratic criterion. We first establish the existence of optimal solutions under appropriate sufficient conditions. We then provide necessary conditions for optimality in terms of adjoint equations and, using the concept of strict dissipativity, we prove that the state and the adjoint satisfy the measure-turnpike property, meaning that the extremal time-varying solution remains essentially close to the optimal solution of an associated static problem. We show that the optimal shape enjoys the exponential turnpike property in terms of Hausdorff distance for a Mayer quadratic cost. We illustrate the turnpike phenomenon in optimal shape design with several numerical simulations.
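For reference, the measure-turnpike property invoked above can be phrased schematically as follows: for every [katex]\varepsilon > 0[/katex] there exists [katex]C(\varepsilon) > 0[/katex], independent of the horizon [katex]T[/katex], such that [katex]\big|\{t \in [0,T] : \|y(t)-\bar{y}\| + \|p(t)-\bar{p}\| > \varepsilon\}\big| \le C(\varepsilon)[/katex], where [katex](y,p)[/katex] is the extremal state-adjoint pair and [katex](\bar{y},\bar{p})[/katex] solves the associated static problem; the exponential turnpike property for shapes is the stronger, pointwise-in-time statement in which the deviation, measured by the Hausdorff distance [katex]d_H(\omega(t),\bar{\omega})[/katex] between the time-varying and the static optimal shapes, is exponentially small outside two boundary layers near [katex]t = 0[/katex] and [katex]t = T[/katex].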
We present a new proof of the turnpike property for nonlinear optimal control problems, when the running target is a steady control-state pair of the underlying dynamics. Our strategy combines the construction of suboptimal quasi-turnpike trajectories via controllability with a bootstrap argument, and does not rely on analyzing the optimality system or on linearization techniques. This in turn allows us to address several optimal control problems for finite-dimensional, control-affine systems with globally Lipschitz (possibly nonsmooth) nonlinearities, without any smallness conditions on the initial data or the running target. These results are motivated by the large-layer regime of residual neural networks, commonly used in deep learning applications. We show that our methodology is also applicable to controlled PDEs, such as the semilinear wave and heat equations with a globally Lipschitz nonlinearity, once again without any smallness assumptions.
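Schematically, the turnpike property we establish is an exponential estimate of the form [katex]\|y_T(t) - \bar{y}\| + \|u_T(t) - \bar{u}\| \le C\big(e^{-\mu t} + e^{-\mu(T-t)}\big)[/katex] for [katex]t \in [0,T][/katex], where [katex](\bar{y},\bar{u})[/katex] is the running steady control-state target, [katex](y_T,u_T)[/katex] is an optimal pair for the horizon [katex]T[/katex], and the constants [katex]C, \mu > 0[/katex] are independent of [katex]T[/katex].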
This paper deals with the averaged dynamics of heat equations in the degenerate case where the diffusivity coefficient, assumed to be constant, is allowed to take the null value. First we prove that the averaged dynamics is analytic. This allows us to show that, most often, the averaged dynamics enjoys the property of unique continuation and is approximately controllable. We then determine whether the averaged dynamics is actually null controllable or not, depending on how the density of averaging behaves when the diffusivity vanishes. At the critical density threshold, the dynamics of the average is similar to that of the [katex]\frac{1}{2}[/katex]-fractional Laplacian, which is well known to be critical in the context of the controllability of fractional diffusion processes. Null controllability then fails (resp. holds) when the density puts more (resp. less) weight on the null-diffusivity regime than this critical one.
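The framework we have in mind is, schematically, the family of heat equations [katex]y_t(x,t;\alpha) = \alpha\,\Delta y(x,t;\alpha)[/katex] indexed by a random diffusivity [katex]\alpha \ge 0[/katex] distributed according to a density [katex]\rho[/katex], together with the averaged state [katex]\langle y\rangle(x,t) = \int_0^\infty y(x,t;\alpha)\,\rho(\alpha)\,d\alpha[/katex]; the controllability questions above concern a single control, independent of [katex]\alpha[/katex], acting on every realization and steering this average.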
We study the inverse problem, or inverse design problem, for a time-evolution Hamilton-Jacobi equation. More precisely, given a target function [katex]u_T[/katex] and a time horizon [katex]T > 0[/katex], we aim to construct all the initial conditions for which the viscosity solution coincides with [katex]u_T[/katex] at time [katex]T[/katex]. As is common for this kind of nonlinear equation, the target might not be reachable. We first study the existence of at least one initial condition leading the system to the given target. The natural candidate, which indeed allows determining the reachability of [katex]u_T[/katex], is the one obtained by reversing the direction of time in the equation, considering [katex]u_T[/katex] as the terminal condition. In this case, we use the notion of backward viscosity solution, which provides existence and uniqueness for the terminal-value problem. We also give an equivalent reachability condition based on a differential inequality, which relates the reachability of the target to its semiconcavity properties. Then, for the case when [katex]u_T[/katex] is reachable, we construct the set of all initial conditions for which the solution coincides with [katex]u_T[/katex] at time [katex]T[/katex]. Note that, in general, such initial conditions are not unique. Finally, for the case when the target [katex]u_T[/katex] is not necessarily reachable, we study the projection of [katex]u_T[/katex] onto the set of reachable targets, obtained by solving the problem backward and then forward in time. This projection is then identified with the solution of a fully nonlinear obstacle problem, and can be interpreted as the semiconcave envelope of [katex]u_T[/katex], i.e., the smallest reachable target bounded from below by [katex]u_T[/katex].
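A concrete setting to keep in mind (the paper may allow more general Hamiltonians) is [katex]\partial_t u + H(\nabla_x u) = 0[/katex] in [katex]\mathbb{R}^n \times (0,T)[/katex] with convex [katex]H[/katex]. In this language, the projection described above is obtained by solving the equation backward from [katex]u_T[/katex] (in the backward viscosity sense) down to time [katex]0[/katex] and then forward again up to time [katex]T[/katex]; the target is reachable precisely when this backward-forward composition returns [katex]u_T[/katex] itself. For instance, for [katex]H(p) = \frac{1}{2}|p|^2[/katex], the Hopf-Lax formula suggests that reachability at time [katex]T[/katex] amounts to the concavity of [katex]x \mapsto u_T(x) - \frac{|x|^2}{2T}[/katex], the kind of semiconcavity condition referred to above.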