Learned forward operators: Variational regularization for black-box models

Presented by: Jonas Adler (KTH - Royal Institute of Technology)
Date: Tuesday 31st October 2017, 15:40 to 16:00
Venue: INI Seminar Room 1
Abstract: 
In inverse problems, accurate modelling of the forward operator is typically one of the most important factors in obtaining good reconstruction quality. Still, most work is done with highly simplified forward models. In Computed Tomography (CT), for example, the true forward model, given by the solution operator of the radiative transport equation, is typically approximated by the ray transform. The primary reason for this gross simplification is that higher-quality forward models are both computationally costly and typically lack an adjoint of the derivative of the forward operator that can be feasibly evaluated. The community is not unaware of this mismatch, but work has focused on the attitude "the model is right, let's fix the data". We instead propose going the other way around: using deep neural networks to learn a mapping from the simplified model to the complicated model. Hence, instead of learning how to correct complicated data so that it matches a simplified forward model, we accept that the data is always right and instead correct the forward model. We then use this learned forward operator, given as the composition of a simplified forward operator and a convolutional neural network, as the forward operator in a classical variational regularization scheme. We give a theoretical argument for why correcting the forward model is more stable than correcting the data, and provide numerical examples in Cone Beam CT reconstruction.
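The scheme described in the abstract can be illustrated with a minimal numerical sketch. All specifics below are assumptions for illustration: the simplified forward operator is a random matrix standing in for the ray transform, the trained CNN correction is replaced by a fixed linear map on data space, and the variational scheme is plain Tikhonov regularization solved by gradient descent. The key structural point from the talk is preserved: the reconstruction uses the composition (correction ∘ simplified operator) and its adjoint derivative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small sizes for illustration: x in R^16, data in R^24.
n, m = 16, 24
A = rng.standard_normal((m, n)) / np.sqrt(n)      # simplified forward operator (stand-in for the ray transform)
C = np.eye(m) + 0.05 * rng.standard_normal((m, m))  # stand-in for the trained CNN correction on data space

def forward(x):
    """Learned forward operator: correction composed with the simplified model."""
    return C @ (A @ x)

def forward_adjoint(r):
    """Adjoint of the derivative; everything is linear here, so just transposes."""
    return A.T @ (C.T @ r)

# Simulated measurement from a ground-truth phantom.
x_true = rng.standard_normal(n)
y = forward(x_true)

# Variational reconstruction with the learned operator:
#   min_x  0.5 * ||C(Ax) - y||^2 + 0.5 * lam * ||x||^2
lam, step = 1e-3, 0.1
x = np.zeros(n)
for _ in range(500):
    grad = forward_adjoint(forward(x) - y) + lam * x
    x -= step * grad

# The data residual shrinks substantially from its initial value ||y||.
print(np.linalg.norm(forward(x) - y))
```

In the actual method the correction would be a convolutional neural network trained to map simplified-model data to accurate-model data, and the adjoint of its derivative would be supplied by automatic differentiation rather than an explicit transpose.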