Sparse regularisation for inverse problems
7 Feb 2014
|Friday 07 February|
|10:55-11:00||Welcome from Christie Marr (INI Deputy Director)|
|11:00-11:45||Brinkmann, E-M (Universität Münster)|
|Exploiting joint sparsity information by coupled Bregman iterations||Sem 1|
|Co-authors: Eva-Maria Brinkmann (WWU Münster), Michael Möller (Arnold & Richter Cinetechnik), Tamara Seybold (Arnold & Richter Cinetechnik)
Many applications are concerned with the reconstruction or denoising of multichannel images (color, spectral, time) with natural prior information of correlated sparsity patterns. The most striking example is joint edge sparsity across the channels of a color image.
We discuss how such prior information can be encoded in Bregman distances for frequently used one-homogeneous functionals, and introduce a novel concept of infimal convolution of Bregman distances. We then discuss appropriate modifications of Bregman iterations towards a coupled reconstruction scheme. First results are presented for color image denoising.
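The key object above, the Bregman distance of a one-homogeneous functional, can be made concrete for the simplest case J(x) = ||x||_1: the distance vanishes exactly when the sign (sparsity) pattern of the new iterate is consistent with the subgradient of the old one, which is what makes it useful for encoding joint sparsity. A minimal illustrative sketch (not the coupled scheme of the talk):

```python
import numpy as np

def bregman_l1(u, v):
    """Bregman distance D_p(u, v) for J(x) = ||x||_1 with the
    subgradient choice p = sign(v):
    D_p(u, v) = ||u||_1 - ||v||_1 - <p, u - v>."""
    p = np.sign(v)
    return np.sum(np.abs(u)) - np.sum(np.abs(v)) - p @ (u - v)

v = np.array([2.0, 0.0, -3.0])
same_signs = np.array([5.0, 0.0, -1.0])   # same sign pattern as v: distance 0
flipped = np.array([-5.0, 0.0, -1.0])     # first sign flipped: distance > 0

print(bregman_l1(same_signs, v))  # 0.0
print(bregman_l1(flipped, v))     # 10.0
```

The distance ignores magnitudes on the shared support and penalises sign disagreements, which is the mechanism behind coupling sparsity patterns across channels.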
|11:45-12:15||Betcke, M (University College London)|
|A priorconditioned LSQR algorithm for linear ill-posed problems with edge-preserving regularization||Sem 1|
|Co-authors: Simon Arridge (University College London), Lauri Harhanen (Aalto University)
In this talk we present a method for solving large-scale linear inverse problems regularized with a nonlinear, edge-preserving penalty term such as total variation or Perona–Malik. In the proposed scheme, the nonlinearity is handled with lagged diffusivity fixed-point iteration, which involves solving a large-scale linear least-squares problem in each iteration. The size of the linear problem calls for iterative methods, e.g. Krylov methods, which are matrix-free, i.e. the forward map need only be defined through its action on a vector. Because the convergence of Krylov methods for problems with discontinuities is notoriously slow, we propose to accelerate it by means of priorconditioning. Priorconditioning is a technique which embeds the information contained in the prior (expressed as a regularizer in the Bayesian framework) directly into the forward operator and hence into the solution space. We derive a factorization-free priorconditioned LSQR algorithm, allowing implicit application of the preconditioner through efficient schemes such as multigrid. We demonstrate the effectiveness of the proposed scheme on a three-dimensional problem in fluorescence diffuse optical tomography using an algebraic multigrid preconditioner.
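The matrix-free setting described above (the forward map available only through its action on a vector) can be illustrated with plain CGLS, a Krylov least-squares method closely related to LSQR. This is a generic sketch of the matrix-free idea, not the priorconditioned algorithm of the talk; all names and sizes are illustrative:

```python
import numpy as np

def cgls(matvec, rmatvec, b, n, iters=50, tol=1e-14):
    """Matrix-free CGLS: minimises ||A x - b||_2 using only the
    actions x -> A x (matvec) and y -> A^T y (rmatvec)."""
    x = np.zeros(n)
    r = b.copy()              # residual b - A x
    s = rmatvec(r)            # gradient direction A^T r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        if gamma < tol:       # converged: gradient essentially zero
            break
        q = matvec(p)
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = rmatvec(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Small consistent least-squares problem; A is only accessed via its action.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = cgls(lambda v: A @ v, lambda v: A.T @ v, b, n=10)
```

In a priorconditioned variant, the two callables would apply the preconditioned operator instead of A itself; the iteration structure is unchanged.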
|12:15-13:45||Lunch and student poster session|
|13:45-14:30||Aykroyd, R (University of Leeds)|
|A statistical perspective on sparse regularization and geometric modelling||Sem 1|
|Consider a typical inverse problem where we wish to reconstruct an unknown function from a set of measurements. When the function is discretized it is usual for the number of data points to be insufficient to uniquely determine the unknowns – the problem is ill-posed. One approach is to reduce the size of the set of eligible solutions until it contains only a single solution—the problem is regularized. There are, however, infinitely many possible restrictions, each leading to a unique solution. Hence the choice of regularization is crucial, but the best choice, even amongst those commonly used, is still difficult to make. Such regularized reconstruction can be placed into a statistical setting where data fidelity becomes a likelihood function and regularization becomes a prior distribution. Reconstruction then becomes a statistical inference task solved, perhaps, using the posterior mode. The common regularization approaches then correspond to different choices of prior distribution. In this talk the ideas of regularized estimation, including ridge, lasso, bridge and elastic-net regression methods, will be defined. Application of sparse regularization to basis function expansions, and other dictionary methods such as wavelets, will be discussed. Their link to smooth and sparse regularization, and to Bayesian estimation, will be considered. As an alternative to locally constrained reconstruction methods, geometric models impose a global structure. Such models are usually problem specific, compared to more generic locally constrained methods, but when the parametric assumptions are reasonable they will make better use of the data, provide simpler models and can include parameters which may be used directly, for example in monitoring or control, without the need for extra post-processing. Finally, the matching of modelling and estimation styles with numerical procedures, to produce efficient algorithms, will be discussed.|
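Two of the regularized estimators named above can be made concrete: ridge has a closed-form solution, while lasso can be solved by ISTA (one standard proximal-gradient scheme). This is an illustrative numpy sketch; all sizes and parameters are hypothetical:

```python
import numpy as np

def ridge(A, b, lam):
    """Ridge regression: closed-form minimiser of ||A x - b||^2 + lam*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def lasso_ista(A, b, lam, iters=500):
    """Lasso via ISTA (proximal gradient):
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L             # gradient step on smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 13]] = [3.0, -2.0, 1.5]             # sparse ground truth
b = A @ x_true
x_ridge = ridge(A, b, 1e-6)                       # smooth penalty: dense estimate
x_lasso = lasso_ista(A, b, lam=0.1)               # l1 penalty: sparse estimate
```

The soft-thresholding step is what produces exact zeros in the lasso solution, illustrating the smooth-versus-sparse distinction made in the abstract.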
|14:30-15:00||Valkonen, T (University of Cambridge)|
|A primal dual method for inverse problems in MRI with non-linear forward operators||Sem 1|
|Co-authors: Martin Benning (University of Cambridge), Dan Holland (University of Cambridge), Lyn Gladden (University of Cambridge), Carola-Bibiane Schönlieb (University of Cambridge), Florian Knoll (New York University), Kristian Bredies (University of Graz)
Many inverse problems inherently involve non-linear forward operators. In this talk, I concentrate on two examples from magnetic resonance imaging (MRI). One is modelling the Stejskal-Tanner equation in diffusion tensor imaging (DTI), and the other is decomposing a complex image into its phase and amplitude components for MR velocity imaging, in order to regularise them independently. The primal-dual method of Chambolle and Pock is advantageous for convex problems where sparsity in the image domain is modelled by total variation type functionals; I recently extended it to non-linear operators. After motivating the algorithm through the above applications and earlier collaborative work using alternative convex models, I will sketch the main ingredients for proving local convergence of the method. Then I will demonstrate very promising numerical performance.
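For the standard convex case with a linear operator, the Chambolle-Pock iteration the talk builds on can be sketched for 1-D total-variation (ROF) denoising. This illustrates the baseline method only, not the non-linear extension of the talk; the signal and parameters are illustrative:

```python
import numpy as np

def tv_denoise_cp(f, lam, iters=300):
    """Chambolle-Pock primal-dual iteration for 1-D ROF denoising:
    min_u 0.5*||u - f||^2 + lam*||D u||_1, D = forward difference."""
    u = f.copy()
    ubar = u.copy()
    p = np.zeros(len(f) - 1)          # dual variable for D u
    tau = sigma = 0.25                # steps satisfying tau*sigma*||D||^2 < 1
    for _ in range(iters):
        # dual step: ascent then projection onto {|p| <= lam}
        p = np.clip(p + sigma * np.diff(ubar), -lam, lam)
        # primal step: prox of the quadratic data term, using D^T p
        div = np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))
        u_old = u
        u = (u - tau * div + tau * f) / (1.0 + tau)
        ubar = 2 * u - u_old          # extrapolation
    return u

rng = np.random.default_rng(2)
truth = np.repeat([0.0, 1.0, 0.0], 40)          # piecewise-constant signal
noisy = truth + 0.2 * rng.standard_normal(truth.size)
denoised = tv_denoise_cp(noisy, lam=0.5)
```

In the non-linear setting of the talk, the linear operator D is replaced by a non-linear forward map and the dual step uses its linearisation, which is where the local convergence analysis comes in.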
|15:30-16:00||Chen, K (University of Liverpool)|
|Restoration of images with blur and noise - effective models for known and unknown blurs||Sem 1|
|In recent years, the interdisciplinary field of imaging science has been experiencing an explosive growth in active research and applications.
In this talk I shall present some recent and new work on modelling the inverse problem of removing noise and blur from an observed image. We assume additive Gaussian noise is present and that the blur is defined by linear filters. Inverting the filtering process does not lead to unique solutions without suitable regularization. There are several cases to discuss:
Firstly I discuss the problem of how to select optimal coupling parameters, given an accurate estimate of the noise level, in a total variation (TV) optimisation model.
Secondly I show a new algorithm for imposing the positivity constraint for the TV model for the case of a known blur.
Finally I show how to generalise the new idea to blind deconvolution, where the blur operator is unknown and must be recovered along with the image. Again TV regularisers are used; moreover, with the splitting idea, our work can be extended to include other high-order regularisers such as the mean curvature.
Once an observed image is improved, further tasks such as segmentation and co-registration become feasible. There will be potentially ample applications to follow up.
Joint work with B. Williams, J. P. Zhang, Y. Zheng, S. Harding (Liverpool) and E. Piccolomini, F. Zama (Bologna). Other collaborators in imaging in general include T. F. Chan, R. H. Chan, B. Yu, N. Badshah, H. Ali, L. Rada, C. Brito, L. Sun, F. L. Yang, N. Chumchob, M. Hintermüller, Y. Q. Dong, X. C. Tai, etc.
Related Links: http://www.liv.ac.uk/~cmchenke - Home page
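The positivity constraint mentioned in the second case above can be imposed with a simple projected-gradient (projected Landweber) step for a known blur. This sketch uses a quadratic regulariser rather than the TV models of the talk, purely to keep the projection step visible; the kernel and parameters are illustrative:

```python
import numpy as np

def blur(u, k):
    """Known linear blur: convolution with a symmetric kernel k, so the
    'same'-mode convolution matrix is its own adjoint."""
    return np.convolve(u, k, mode="same")

def deblur_positive(f, k, mu=1e-3, tau=0.4, iters=400):
    """Projected Landweber for
    min_{u >= 0} 0.5*||K u - f||^2 + 0.5*mu*||u||^2.
    Illustrative quadratic regulariser; the talk's models use TV instead."""
    u = np.maximum(f, 0.0)
    for _ in range(iters):
        grad = blur(blur(u, k) - f, k) + mu * u   # gradient of the objective
        u = np.maximum(u - tau * grad, 0.0)       # projection onto u >= 0
    return u

k = np.array([0.25, 0.5, 0.25])                   # simple symmetric blur kernel
truth = np.zeros(60)
truth[20:30] = 1.0                                # nonnegative box signal
observed = blur(truth, k)
restored = deblur_positive(observed, k)
```

The projection `np.maximum(. , 0)` after each gradient step is the entire cost of the constraint, which is why positivity is attractive to add to more sophisticated TV models.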
|16:00-16:30||Rickett, J (Schlumberger Cambridge Research)|
|Deghosting seismic data by sparse reconstruction||Sem 1|
|In marine environments, seismic reflection data is typically acquired with acoustic sensors attached to multiple streamers towed relatively close to the sea surface. Upward-going waves reflect from the sea surface and destructively interfere with the primary signal. Ideally we would like to deconvolve these “ghost” events from our data. However, their phase delay depends on the angle of propagation at the receiver, and unfortunately, streamer separation is such that most frequencies of interest are aliased, so this angle cannot be easily determined.
In this talk, I will show how the problem can be addressed with the machinery of compressed sensing. I will illustrate with data examples how the trade-offs involved in the choice of basis function, the choice of sparse solver, the dimensionality in which the problem is framed, and the accuracy of the physics in the forward model all affect the quality and cost of the reconstruction.
|16:30-17:00||Hansen, A (University of Cambridge)|
|Compressed sensing in the real world - The need for a new theory||Sem 1|
|Compressed sensing is based on three pillars: sparsity, incoherence and uniform random subsampling. In addition, the concepts of uniform recovery and the Restricted Isometry Property (RIP) have had a great impact. Intriguingly, in an overwhelming number of inverse problems where compressed sensing is used or can be used (such as MRI, X-ray tomography, electron microscopy, reflection seismology etc.) these pillars are absent. Moreover, easy numerical tests reveal that with the successful sampling strategies used in practice one does not observe uniform recovery nor the RIP. In particular, none of the existing theory can explain the success of compressed sensing in a vast area where it is used. In this talk we will demonstrate how real-world problems are not sparse yet asymptotically sparse, and coherent yet asymptotically incoherent, and moreover, that uniform random subsampling yields highly suboptimal results. In addition, we will present easy arguments explaining why uniform recovery and the RIP are not observed in practice. Finally, we will introduce a new theory that aligns with the actual implementation of compressed sensing that is used in applications. This theory is based on asymptotic sparsity, asymptotic incoherence and random sampling with different densities. This theory supports two intriguing phenomena observed in reality: 1. the success of compressed sensing is resolution dependent, 2. the optimal sampling strategy is signal structure dependent. The last point opens up a whole new area of research, namely the quest for the optimal sampling strategies.|
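As a baseline for the classical setting the talk critiques (an exactly sparse signal, incoherent random measurements), here is a minimal orthogonal matching pursuit sketch; it illustrates the standard recovery problem only, not the asymptotic-sparsity theory proposed in the talk, and all sizes are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from measurements y = A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-correlated column
        if j not in support:
            support.append(j)
        # refit on the current support and update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(3)
n, m, k = 100, 50, 3                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[10, 40, 70]] = [3.0, -2.0, 1.5]   # exactly sparse ground truth
y = A @ x_true
x_rec = omp(A, y, k)
```

The talk's point is that in real applications the signal is only asymptotically sparse and the sensing matrix is coherent, so uniform random designs like this one are far from optimal and the sampling density should follow the signal structure.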
|17:00-18:00||Welcome Wine Reception|