
Deep learning as optimal control problems and Riemannian discrete gradient descent

Presented by: Elena Celledoni
Date: Thursday 21st November 2019, 15:05 to 15:45
Venue: INI Seminar Room 2
Abstract: 
We consider recent work in which deep neural networks have been interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. We review the first-order conditions for optimality, and the conditions ensuring optimality after discretisation. This leads to a class of algorithms for solving the discrete optimal control problem which guarantee that the corresponding discrete necessary conditions for optimality are fulfilled. The differential equation setting lends itself to learning additional parameters, such as the time discretisation. We explore this extension alongside natural constraints (e.g. that the time steps lie in a simplex). We compare these deep learning algorithms numerically in terms of the induced flow and generalisation ability.

References: 
- M. Benning, E. Celledoni, M. J. Ehrhardt, B. Owren, C.-B. Schönlieb, "Deep learning as optimal control problems: models and numerical methods", Journal of Computational Dynamics (JCD).
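To make the ODE interpretation in the abstract concrete, the following is a minimal sketch, not taken from the talk: the vector field, the parameter names, and the softmax parametrisation of the step sizes are all illustrative assumptions. It shows a ResNet-style forward pass read as forward Euler steps on the ODE constraint y' = f(y, theta), with learnable step sizes kept positive and summing to a fixed horizon (the simplex constraint on the time discretisation mentioned in the abstract).

```python
import numpy as np

def vector_field(y, theta):
    # Illustrative choice of the ODE right-hand side f(y, theta):
    # one affine layer followed by tanh. The talk's f may differ.
    W, b = theta
    return np.tanh(W @ y + b)

def simplex_steps(s, T=1.0):
    # Hypothetical softmax parametrisation: an unconstrained vector s
    # is mapped to step sizes h_k > 0 with sum(h_k) = T, i.e. the
    # (scaled) simplex constraint on the time discretisation.
    e = np.exp(s - np.max(s))
    return T * e / np.sum(e)

def forward_euler_network(y0, thetas, hs):
    # ResNet-style forward pass read as forward Euler on y' = f(y, theta):
    #   y_{k+1} = y_k + h_k * f(y_k, theta_k)
    y = y0
    for theta, h in zip(thetas, hs):
        y = y + h * vector_field(y, theta)
    return y

# Usage: three "layers" (Euler steps) acting on a 2-dimensional state.
rng = np.random.default_rng(0)
thetas = [(rng.standard_normal((2, 2)), rng.standard_normal(2)) for _ in range(3)]
hs = simplex_steps(rng.standard_normal(3))   # step sizes sum to T = 1
y_final = forward_euler_network(np.ones(2), thetas, hs)
```

In this reading, training the layer parameters theta_k and the step parameters s jointly is an instance of the discrete optimal control problem discussed in the abstract.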



