
Mathematics of deep learning

Participation in INI programmes is by invitation only. Anyone wishing to apply to participate in the associated workshop(s) should use the relevant workshop application form.

1st July 2021 to 17th December 2021
Gitta Kutyniok
Peter Bartlett
Anders Hansen
Arnulf Jentzen
Carola-Bibiane Schönlieb

Programme theme:

Fuelled by massive amounts of training data and a tremendous increase in computing power, deep neural networks have recently seen an impressive comeback. Algorithms based on deep neural networks now permeate the public sector, from pre-screening job applications to revolutionizing the healthcare industry. A similarly strong impact can be observed on science itself: deep learning based approaches have proven very useful in certain mathematical problem settings, such as solving ill-posed inverse problems or high-dimensional partial differential equations, in some cases already yielding state-of-the-art algorithms. However, most of the related research is still empirically driven, and a sound theoretical foundation is largely missing. This is not only a significant problem from a scientific viewpoint; it is particularly critical for sensitive applications such as those in the healthcare sector. There is therefore a tremendous need for a mathematics of deep learning.


Aiming to derive a mathematical foundation of deep learning, this programme addresses theoretical questions in two realms:

(1) Theoretical foundations of deep learning independent of a particular application.

(2) Theoretical analysis of the potential and the limitations of deep learning for mathematical methodologies, in particular, for inverse problems and partial differential equations.


Area (1) focusses on expressivity, asking how powerful a network architecture is from an approximation viewpoint; on learning, aiming at a rigorous mathematical analysis of training algorithms; and on generalization, seeking to understand the ability of neural networks to perform well also on out-of-sample data. These three research directions originate from the viewpoint of statistical learning theory. In addition, the programme aims to study questions related to the interpretability, security, and safety of deep learning.
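As a purely illustrative toy sketch (not part of the programme), the expressivity and learning questions above can be made concrete at a small scale: a one-hidden-layer tanh network trained by plain gradient descent to approximate a smooth target function. The architecture, target function, and all hyperparameters below are assumptions chosen for illustration.

```python
# Illustrative sketch: a 1 -> 16 -> 1 tanh network fit to f(x) = sin(pi x)
# on [-1, 1] by hand-written backpropagation and gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the target function sampled on a grid.
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(np.pi * x)

# Network parameters (assumed sizes, for illustration only).
W1 = rng.normal(0.0, 1.0, (1, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    return h, h @ W2 + b2             # hidden layer, network output

_, pred0 = forward(x)
loss0 = np.mean((pred0 - y) ** 2)     # mean-squared error before training

lr, n = 0.05, x.shape[0]
for step in range(2000):
    h, pred = forward(x)
    err = pred - y                    # dLoss/dpred, up to a constant factor
    gW2 = h.T @ err / n
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = x.T @ dh / n
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = np.mean((pred - y) ** 2)
print(f"mse before: {loss0:.4f}  after: {loss:.4f}")
```

Even this toy already exhibits the three questions of Area (1): how well the chosen width can represent the target (expressivity), whether gradient descent finds good parameters (learning), and how the fit behaves between the sampled points (generalization).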


Area (2) focusses on the application of deep neural networks to solving ill-posed inverse problems and partial differential equations. Key research goals in this regime are to understand how deep learning can be optimally combined with model-based methods, and to develop complete mathematical error analyses for deep learning based approximation algorithms, revealing both the success and the limitations of such algorithms when applied to ill-posed inverse problems or high-dimensional partial differential equations.
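A minimal sketch of the flavour of Area (2), under assumptions chosen purely for illustration (this is not any method studied by the programme): a tiny network is used as an ansatz for the solution of a 1D Poisson problem u''(t) = f(t) on [0, 1] with u(0) = u(1) = 0, the residual is measured by finite differences on a grid, and the parameters are updated by descent along numerically estimated gradients with backtracking.

```python
# Hypothetical sketch: neural ansatz for u''(t) = -pi^2 sin(pi t), whose
# exact solution is u(t) = sin(pi t). The factor t(1-t) enforces the
# boundary conditions by construction.
import numpy as np

rng = np.random.default_rng(1)
f = lambda t: -np.pi**2 * np.sin(np.pi * t)

n_hidden = 8
# Flattened parameters: W1 (1 x h), b1 (h), W2 (h x 1), b2 (scalar).
theta = rng.normal(0.0, 0.5, size=3 * n_hidden + 1)

def u(theta, t):
    W1 = theta[:n_hidden].reshape(1, -1)
    b1 = theta[n_hidden:2 * n_hidden]
    W2 = theta[2 * n_hidden:3 * n_hidden].reshape(-1, 1)
    b2 = theta[-1]
    net = np.tanh(t.reshape(-1, 1) @ W1 + b1) @ W2 + b2
    return (t * (1 - t)) * net.ravel()      # ansatz with u(0) = u(1) = 0

t = np.linspace(0.0, 1.0, 41)
h = t[1] - t[0]

def loss(theta):
    v = u(theta, t)
    # Central-difference approximation of u'' on the interior grid points.
    resid = (v[2:] - 2 * v[1:-1] + v[:-2]) / h**2 - f(t[1:-1])
    return np.mean(resid ** 2)

loss0 = loss(theta)
eps, cur = 1e-5, loss0
for step in range(100):
    g = np.zeros_like(theta)
    for i in range(theta.size):             # numerical gradient estimate
        d = np.zeros_like(theta); d[i] = eps
        g[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    lr = 1e-2
    while lr > 1e-9 and loss(theta - lr * g) >= cur:
        lr *= 0.5                           # backtrack until the loss drops
    if lr <= 1e-9:
        break                               # no descent direction found
    theta = theta - lr * g
    cur = loss(theta)

print(f"residual mse: {loss0:.3f} -> {cur:.3f}")
```

The two questions named above appear directly: the finite-difference residual is a crude model-based component combined with the learned ansatz, and bounding the gap between the trained ansatz and the true solution is exactly the kind of error analysis the programme targets.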


The main goal of this programme is to achieve substantial progress in developing a theoretical foundation of deep learning. To this end, the programme will, for the first time, gather leading experts from various areas of mathematics and the theory of machine learning, including computer scientists, physicists, and statisticians, in one place, initiating collaborations across intra- and interdisciplinary boundaries and thereby generating unprecedented research dynamics.
