# Seminars (VMV)

Videos and presentation materials from other INI events are also available.


Event When Speaker Title Presentation Material
VMV 30th August 2017
16:00 to 17:00
Michael Hintermüller On (pre) dualization, dense embeddings of convex sets, and applications in image processing
VMVW01 4th September 2017
09:50 to 10:40
Joachim Weickert Efficient and Stable Schemes for 2D Forward-and-Backward Diffusion
Co-author: Martin Welk (UMIT Hall, Austria)

Image enhancement with forward-and-backward (FAB) diffusion is numerically very challenging due to its negative diffusivities. As a remedy, we first extend the explicit nonstandard scheme of Welk et al. (2009) from the 1D scenario to the practically relevant two-dimensional setting. We prove that under a fairly severe time step restriction, this 2D scheme preserves a maximum-minimum principle. Moreover, we find an interesting Lyapunov sequence which guarantees convergence to a flat steady state. Since a global application of the time step size restriction leads to very slow algorithms and is more restrictive than necessary for most pixels, we introduce a much more efficient scheme with locally adapted time step sizes. It applies diffusive interactions of adjacent pixel pairs in a randomized order and adapts the time step size locally. These space-variant time steps are synchronized at sync times which are determined by stability properties of the explicit forward diffusion scheme. Experiments show that our novel two-pixel scheme makes it possible to compute FAB diffusion with guaranteed stability in the maximum norm at a speed that can be three orders of magnitude higher than that of its explicit counterpart with a global time step size.
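As a purely illustrative sketch (not the scheme of the talk), an explicit 1D step of forward-and-backward diffusion might look as follows. The toy diffusivity g(s) = 1 - (s/lam)^2 is an assumption chosen only so that it turns negative beyond a contrast parameter lam, which is what makes FAB diffusion sharpen large gradients while smoothing small ones:

```python
import numpy as np

def fab_step(u, tau=0.05, lam=1.0):
    """One explicit Euler step of 1D forward-and-backward diffusion.

    The diffusivity g is positive for small gradients (smoothing) and
    negative for large ones (sharpening); this toy choice is for
    illustration only, not the model discussed in the abstract.
    """
    # central gradient with replicating (Neumann-type) boundaries
    up = np.pad(u, 1, mode="edge")
    grad = 0.5 * (up[2:] - up[:-2])
    g = 1.0 - (grad / lam) ** 2            # g < 0 where |grad| > lam
    # fluxes on half-grid points, then their divergence
    gp = np.pad(g, 1, mode="edge")
    flux_p = 0.5 * (gp[1:-1] + gp[2:]) * (up[2:] - up[1:-1])
    flux_m = 0.5 * (gp[:-2] + gp[1:-1]) * (up[1:-1] - up[:-2])
    return u + tau * (flux_p - flux_m)

u = np.array([0.0, 0.0, 0.2, 0.8, 1.0, 1.0])
u1 = fab_step(u)
```

Because the step is written in divergence form with reflecting boundaries, it conserves the mean grey value; stability, however, requires a severe restriction on tau, which is exactly what motivates locally adaptive time stepping as in the abstract above.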
VMVW01 4th September 2017
11:10 to 12:00
Yuri Boykov Spectral Clustering meets Graphical Models
Co-authors: Dmitri Marin (UWO), Meng Tang (UWO), Ismail Ben Ayed (ETS, Montreal)

This talk discusses two seemingly unrelated data analysis methodologies: kernel clustering and graphical models. Clustering is widely used for general data where kernel methods are particularly popular due to their discriminating power. Graphical models such as Markov Random Fields (MRF) and related continuous geometric methods represent the state-of-the-art regularization methodology for image segmentation. While both clustering and regularization models are very widely used in machine learning and computer vision, they were not combined before due to significant differences in the corresponding optimization, e.g. spectral relaxation vs. combinatorial methods for submodular optimization and its approximations. This talk reviews the general properties of kernel clustering and graphical models, discusses their limitations (including newly discovered "density biases" in kernel methods), and proposes a general unified framework based on our new bound optimization algorithm. In particular, we show that popular MRF potentials introduce principled geometric and contextual constraints into clustering, while standard kernel methodology allows graphical models to work with arbitrary high-dimensional features.

VMVW01 4th September 2017
12:00 to 12:50
Alfred Bruckstein On Overparametrization in Variational Methods
The talk will survey the idea of using over-parametrization in variational methods, in cases where parameterized models for the signals of interest are available. Recently such methods were observed to yield state-of-the-art results in recovering optic flow fields and in some signal segmentation problems.
VMVW01 4th September 2017
14:00 to 14:50
Martin Burger Nonlinear Spectral Decomposition
In this talk we will discuss nonlinear spectral decompositions in Banach spaces, which shed a new light on multiscale methods in imaging and open new possibilities of filtering techniques. We provide a novel geometric interpretation of nonlinear eigenvalue problems in Banach spaces and provide conditions under which gradient flows for norms or seminorms yield a spectral decomposition. We will see that under these conditions standard variational schemes are equivalent to the gradient flows for arbitrary large time step, recovering previous results e.g. for the one dimensional total variation flow as special cases.

The talk is based on joint work with Guy Gilboa, Michael Moeller, Martin Benning, Daniel Cremers, Lina Eckardt
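For orientation, and in notation assumed here rather than taken from the talk: for a one-homogeneous functional $J$ (e.g. total variation), the nonlinear eigenvalue problem, the gradient flow started at the data $f$, and the spectral response used in this line of work can be written as

```latex
\lambda u \in \partial J(u), \qquad
\partial_t u(t) \in -\partial J(u(t)), \quad u(0) = f, \qquad
\phi(t) = t\,\partial_{tt} u(t), \quad
f = \int_0^\infty \phi(t)\,\mathrm{d}t + \bar{f}.
```

Eigenfunctions in this sense decay at a single "frequency" $t$ under the flow, which is what makes filtering by suppressing or amplifying individual $\phi(t)$ bands meaningful.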
VMVW01 4th September 2017
14:50 to 15:40
Guy Gilboa Nonlinear spectral analysis - beyond the convex case
A brief overview will be given on current results in nonlinear eigenvalue analysis for one homogeneous functionals. We will then discuss how one can go beyond the convex framework by analyzing decay patterns of iterative filtering, based on sparsity constraints.

VMVW01 4th September 2017
16:10 to 17:00
Jean-Francois Aujol Video colorization by a variational approach
This work provides a new method to colorize gray-scale images. While the reverse operation is straightforward, the colorization process is an ill-posed problem that requires some priors. Two classes of approach exist in the literature. The first class comprises manual methods that require the user to add colors to the image to be colorized. The second comprises exemplar-based approaches, where a color image with similar semantic content is provided as input to the method. These two types of priors have their own advantages and drawbacks. In this work, a new variational framework for exemplar-based colorization is proposed. A non-local approach is used to find relevant colors in the source image and suggest them for the gray-scale image. The spatial coherency of the result, as well as the final color selection, is enforced by a non-convex variational framework based on total variation, minimized with an efficient primal-dual algorithm. We also extend the proposed exemplar-based approach to combine exemplar-based and manual methods, providing a single framework that unifies the advantages of both approaches. Finally, experiments and comparisons with state-of-the-art methods illustrate the efficiency of our method.
This is joint work with Fabien Pierre, Aurélie Bugeau, Nicolas Papadakis, and Vinh Tong Ta.
VMVW01 5th September 2017
09:00 to 09:50
Antonin Chambolle Minimization of curvature-dependent functionals
In this joint work with T. Pock (TU Graz, Austria) we present a relaxation of line energies which depend on the curvature, such as the elastica functional, introduced in particular to complete contours in image inpainting problems. Our relaxation is convex and tight on C^2 curves.
VMVW01 5th September 2017
09:50 to 10:40
Ke Chen Fractional Order Derivatives Regularization: Models, Algorithms and Applications
In variational imaging and other inverse problem modeling, regularisation plays a major role. In recent years, high-order regularisers such as the mean curvature, the Gaussian curvature and Euler's elastica have been increasingly studied and applied, and many impressive improvements over the widely used gradient-based models have been reported.

Here we present some results from studying another class of high- and non-integer-order regularisers based on fractional-order derivatives, and focus on two aspects of this class of models: (i) theoretical analysis and advantages; (ii) efficient algorithms. We found that models with regularization by fractional-order derivatives are convex in a suitable space, and that structured-matrix techniques can be exploited to design efficient algorithms. Applications to restoration and registration are illustrated. This opens many opportunities to apply these regularisers to a wide class of imaging problems.

Ke Chen and J. P. Zhang, EPSRC Liverpool Centre for Mathematics in Healthcare, Centre for Mathematical Imaging Techniques, and Department of Mathematical Sciences, The University of Liverpool, United Kingdom. http://tinyurl.com/EPSRC-LCMH
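As background (standard definitions, not taken from the talk): the Riemann-Liouville fractional derivative of order $\alpha \in (n-1, n)$, and a typical fractional-order regularized restoration model with forward operator $K$ and data $f$, read

```latex
D^{\alpha}u(x) = \frac{1}{\Gamma(n-\alpha)}\,
\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}
\int_{a}^{x} \frac{u(t)}{(x-t)^{\alpha-n+1}}\,\mathrm{d}t,
\qquad
\min_{u}\ \frac{\lambda}{2}\,\|Ku-f\|_{2}^{2} + \|D^{\alpha}u\|_{1}.
```

The nonlocal integral kernel is also what produces the structured (Toeplitz-like) matrices mentioned in the abstract when the derivative is discretized.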
VMVW01 5th September 2017
11:10 to 12:00
Michael Ng Tensor Data Analysis: Models and Algorithms
In this talk, we discuss some models and algorithms for tensor data analysis. Examples in imaging sciences are presented to illustrate the results of the proposed models and algorithms.
VMVW01 5th September 2017
12:00 to 12:50
Kristian Bredies Preconditioned and accelerated Douglas-Rachford algorithms for the solution of variational imaging problems
Co-author: Hongpeng Sun (Renmin University of China)

We present preconditioned and accelerated versions of the Douglas-Rachford (DR) splitting method for the solution of convex-concave saddle-point problems which often arise in variational imaging. The methods make it possible to replace the solution of a linear system in each iteration step of the corresponding DR iteration by approximate solvers without the need of controlling the error. These iterations are shown to converge in Hilbert space under minimal assumptions on the preconditioner and for any step-size. Moreover, ergodic sequences associated with the iteration admit at least a convergence rate of $\mathcal{O}(1/N)$ in terms of restricted primal-dual gaps. Further, strong convexity of one or both of the involved functionals allows for acceleration strategies that yield improved rates of $\mathcal{O}(1/N^2)$ and linear convergence, respectively.

The methods are applied to non-smooth and convex variational imaging problems. We discuss denoising and deconvolution with $\ell_2$ and $\ell_1$ discrepancy and total variation (TV) as well as total generalized variation (TGV) penalty. Preconditioners which are specific to these problems are presented, the results of numerical experiments are shown and the benefits of the respective preconditioned iterations are discussed.
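To fix ideas, a plain (unpreconditioned, unaccelerated) Douglas-Rachford iteration can be sketched on a toy problem; the model instance, step size, and iteration count below are illustrative assumptions, not the preconditioned methods of the talk:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal map of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(b, mu, t=1.0, iters=200):
    """DR splitting for min_x 0.5*||x - b||^2 + mu*||x||_1,
    a toy problem whose solution is known in closed form."""
    z = np.zeros_like(b)
    for _ in range(iters):
        x = (z + t * b) / (1.0 + t)     # prox of the quadratic term
        y = soft(2.0 * x - z, t * mu)   # prox of the l1 term at the reflection
        z = z + y - x                   # update of the driving sequence
    return x

b = np.array([3.0, -0.5, 1.5])
x = douglas_rachford(b, mu=1.0)
# the exact minimizer here is soft(b, 1.0)
```

The point of the preconditioned variants in the talk is that, for imaging problems, the first prox step involves a large linear system, and the analysis allows it to be replaced by an inexpensive approximate solve.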

VMVW01 5th September 2017
12:30 to 18:00
Computational Challenges in Image Processing - http://www.turing-gateway.cam.ac.uk/event/ofbw32
OFBW32 5th September 2017
13:30 to 13:40
Christie Marr, Jane Leeks Welcome and Introduction
OFBW32 5th September 2017
13:40 to 13:50
Antonin Chambolle Organiser Introduction
OFBW32 5th September 2017
13:50 to 14:25
Alexandre Gramfort Statistical Machine Learning and Optimisation Challenges for Brain Imaging at a Millisecond Timescale
OFBW32 5th September 2017
14:25 to 15:00
Andrew Curtis Nonlinear Tomography
OFBW32 5th September 2017
15:20 to 15:45
Roger Noble Validating Machine Learning Models Visually with Zegami
OFBW32 5th September 2017
15:45 to 16:10
Mark Bray Computational Challenges for Long Range Imaging
OFBW32 5th September 2017
16:10 to 16:35
Peter Fretwell Imaging Whales from Space
OFBW32 5th September 2017
16:35 to 17:00
Open Discussion and Questions
OFBW32 5th September 2017
17:00 to 18:00
Drinks Reception and Networking
VMVW01 6th September 2017
09:00 to 09:50
Laurent Cohen Geodesic Methods for Interactive Image Segmentation using Finsler metrics
Minimal paths have long been used as an interactive tool to find edges or tubular structures as cost-minimizing curves. The user usually provides start and end points on the image and gets the minimal path as output. These minimal paths correspond to minimal geodesics according to some adapted metric. They are a way to find a (set of) curve(s) globally minimizing the geodesic active contours energy. The geodesic distance is found by solving the Eikonal equation with the fast and efficient Fast Marching method.
Different metrics can be adapted to various problems. In the past years we have introduced different extensions of these minimal paths that improve either the interactive aspects or the results. For example, the metric can take into account both the scale and orientation of the path. This leads to solving an anisotropic minimal path problem in a 2D or 3D+radius space.
We recently introduced the use of Finsler metrics, which make it possible to take the local curvature into account in order to smooth the path. The approach can also be adapted to include a region term inside the closed curve formed by a set of minimal geodesics.

Co-authors: Da Chen and J.-M. Mirebeau
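As a rough illustration of the minimal-path idea (a simplification, not the Fast Marching solver of the talk): Dijkstra's algorithm on a 4-connected grid computes a first-order approximation of the geodesic distance map from a seed point, from which minimal paths are obtained by gradient descent on the distance:

```python
import heapq
import numpy as np

def grid_geodesic(cost, start):
    """Dijkstra on a 4-connected grid: a simple stand-in for the
    Fast Marching solution of the Eikonal equation |grad u| = cost.
    `cost` is a 2D array of positive local costs, `start` an (i, j) seed."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                # edge weight: average of the two local costs
                nd = d + 0.5 * (cost[i, j] + cost[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(pq, (nd, (ni, nj)))
    return dist

cost = np.ones((5, 5))
dist = grid_geodesic(cost, (0, 0))
```

Fast Marching replaces the graph metric by an upwind finite-difference solve of the Eikonal equation, removing the grid ("Manhattan") bias that this sketch exhibits.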

VMVW01 6th September 2017
09:50 to 10:40
This talk will address several issues related to training neural networks using stochastic gradient methods.  First, we'll talk about the difficulties of training in a distributed environment, and present a new method called centralVR for boosting the scalability of training methods.  Then, we'll talk about the issue of automating stochastic gradient descent, and show that learning rate selection can be simplified using "Big Batch" strategies that adaptively choose minibatch sizes.
VMVW01 6th September 2017
11:10 to 12:00
Vladimir Kolmogorov Valued Constraint Satisfaction Problems
I will consider the Valued Constraint Satisfaction Problem (VCSP), whose goal is to minimize a sum of local terms where each term comes from a fixed set of functions (called a "language") over a fixed discrete domain. I will present recent results characterizing languages that can be solved using the basic LP relaxation. This includes languages consisting of submodular functions, as well as their generalizations.

One such generalization is the class of k-submodular functions. In the second part of the talk I will present an application of such functions in computer vision.

Based on joint papers with Igor Gridchyn, Andrei Krokhin, Michal Rolinek, Johan Thapper and Stanislav Zivny.
VMVW01 6th September 2017
12:00 to 12:50
Sung Ha Kang Efficient Numerical Methods for Variational Inpainting Models
Co-authors: Maryam Yashtini (Georgia Institute of Technology), Wei Zhu (The University of Alabama)

Recent developments of fast algorithms, based on operator splitting, augmented Lagrangian, and alternating minimization, have enabled us to revisit some of the variational image inpainting models. In this talk, we will present fast algorithms for Euler's elastica image inpainting model, and for a variational edge-weighted image colorization model based on chromaticity and brightness. The main ideas of the models and algorithms, some analysis, and numerical results will be presented.
VMVW01 6th September 2017
14:00 to 14:50
Jalal Fadili Sensitivity Analysis with Degeneracy: Mirror Stratifiable Functions
This talk will present a set of sensitivity analysis and activity identification results for a class of convex functions with a strong geometric structure, that we coin "mirror-stratifiable". These functions are such that there is a bijection between a primal and a dual stratification of the space into partitioning sets, called strata. This pairing is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms. This class of functions encompasses all regularizers routinely used in signal and image processing, machine learning, and statistics. We show that this "mirror-stratifiable" structure enjoys a nice sensitivity theory, allowing us to study stability of solutions of optimization problems to small perturbations, as well as activity identification of first-order proximal splitting-type algorithms.

Existing results in the literature typically assume that, under a non-degeneracy condition, the active set associated to a minimizer is stable to small perturbations and is identified in finite time by optimization schemes. In contrast, our results do not require any non-degeneracy assumption: in consequence, the optimal active set is not necessarily stable anymore, but we are able to track precisely the set of identifiable strata. We show that these results have crucial implications when solving challenging ill-posed inverse problems via regularization, a typical scenario where the non-degeneracy condition is not fulfilled. Our theoretical results, illustrated by numerical simulations, allow us to characterize the instability behaviour of the regularized solutions, by locating the set of all low-dimensional strata that can be potentially identified by these solutions.

This is a joint work with Jérôme Malick and Gabriel Peyré.

VMVW01 6th September 2017
14:50 to 15:40
Zuoqiang Shi Low dimensional manifold model for image processing
In this talk, I will introduce a novel low-dimensional manifold model for image processing problems. This model is based on the observation that for many natural images, the patch manifold usually has a low-dimensional structure. We then use the dimension of the patch manifold as a regularization to recover the original image. Using some formulas from differential geometry, this problem is reduced to solving a Laplace-Beltrami equation on the manifold. The Laplace-Beltrami equation is solved by the point integral method. Numerical tests show that this method gives very good results in image inpainting, denoising and super-resolution problems.
This is joint work with Stanley Osher and Wei Zhu.
VMVW01 6th September 2017
16:10 to 17:00
Gabriele Steidl Convex Analysis in Hadamard Spaces
joint work with M. Bacak, R. Bergmann, M. Montag and J. Persch

The aim of the talk is two-fold:

1. A well known result of H. Attouch states that the Mosco convergence of a sequence of proper convex lower semicontinuous functions defined on a Hilbert space is equivalent to the pointwise convergence of the associated Moreau envelopes. In the present paper we generalize this result to Hadamard spaces. More precisely, while it has already been known that the Mosco convergence of a sequence of convex lower semicontinuous functions on a Hadamard space implies the pointwise convergence of the corresponding Moreau envelopes, the converse implication was an open question. We now fill this gap. Our result has several consequences. It implies, for instance, the equivalence of the Mosco and Frolik-Wijsman convergences of convex sets. As another application, we show that there exists a complete metric on the cone of proper convex lower semicontinuous functions on a separable Hadamard space such that a sequence of functions converges in this metric if and only if it converges in the sense of Mosco.

2. We extend the parallel Douglas-Rachford algorithm  to the manifold-valued setting.
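For reference (standard definitions, with $d$ the metric of the Hadamard space $H$): the Moreau envelope of $f$, and the Attouch-type equivalence discussed in part 1, take the form

```latex
f_{\lambda}(x) = \inf_{y \in H}\Big[\, f(y) + \tfrac{1}{2\lambda}\, d(x,y)^{2} \,\Big],
\qquad
f_n \xrightarrow{\;\mathrm{Mosco}\;} f
\;\Longleftrightarrow\;
(f_n)_{\lambda}(x) \to f_{\lambda}(x)
\quad \text{for all } x \in H,\ \lambda > 0.
```

In a Hadamard space the squared distance is strongly convex along geodesics, so the infimum is attained at a unique point (the proximal point), just as in the Hilbert setting.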
VMVW01 7th September 2017
09:00 to 09:50
Xue-Cheng Tai Fast Algorithms for Euler's Elastica energy minimization and applications
This talk is divided into three parts. In the first part, we will introduce the essential ideas in using augmented Lagrangian/operator-splitting techniques for fast numerical algorithms for minimizing Euler's elastica energy. In the second part, we consider an Euler's elastica based image segmentation model. An interesting feature of this model lies in its preference for convex segmentation contours. However, due to the high-order and non-differentiable term, it is often nontrivial to minimize the associated functional. In this work, we propose using the augmented Lagrangian method to tackle the minimization problem. In particular, we design a novel augmented Lagrangian functional that deals with the mean curvature term differently from those in previous works. The new treatment reduces the number of Lagrange multipliers employed and, more importantly, helps represent the curvature more effectively and faithfully. Numerical experiments validate the efficiency of the proposed augmented Lagrangian method and also demonstrate new features of this particular segmentation model, such as shape-driven and data-driven properties. In the third part, we will introduce some recent fast algorithms for minimizing Euler's elastica energy for interface problems. The method combines level set and binary representations of interfaces. The algorithm only needs to solve a Rudin-Osher-Fatemi problem and re-distance the level set function to minimize the elastica energy. The algorithm is easy to implement, fast and efficient. The content of this talk is based on joint works with Egil Bae, Tony Chan, Jinming Duan and Wei Zhu.
Related links: 1) ftp://ftp.math.ucla.edu/pub/camreport/cam17-36.pdf 2) https://www.researchgate.net/profile/Xue_Cheng_Tai/publication/312519936_Augmented_Lagrangian_method_for_an_Euler's_elastica_based_segmentation_model_that_promotes_convex_contours/links/58a1b9d292851c7fb4c1907f/Augmented-Lagrangian-method-for-an-Eulers-elastica-based-segmentation-model-that-promotes-convex-contours.pdf 3) https://www.researchgate.net/publication/257592616_Image_Segmentation_Using_Euler%27s_Elastica_as_the_Regularization.
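For context, the Euler's elastica energy of a curve $\Gamma$, and its standard level-set formulation over the image domain $\Omega$ (with weights $a, b > 0$ balancing length and curvature), are

```latex
E(\Gamma) = \int_{\Gamma} \big( a + b\,\kappa^{2} \big)\, \mathrm{d}s,
\qquad
E(u) = \int_{\Omega} \bigg( a + b \Big( \nabla \cdot \frac{\nabla u}{|\nabla u|} \Big)^{2} \bigg)\, |\nabla u|\, \mathrm{d}x .
```

The curvature term $\kappa = \nabla \cdot (\nabla u / |\nabla u|)$ is what makes the functional high-order and non-differentiable, hence the need for the splitting and augmented Lagrangian techniques described in the abstract.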
VMVW01 7th September 2017
09:50 to 10:40
Thomas Pock End-to-end learning of CNN features in discrete optimization models for motion and stereo
Co-authors: Patrick Knöbelreiter (Graz University of Technology), Alexander Shekhovtsov (Technical University of Prague), Gottfried Munda (Graz University of Technology), Christian Reinbacher (Amazon)

For many years, discrete optimization models such as conditional random fields (CRFs) have defined the state-of-the-art for classical correspondence problems such as motion and stereo. One of the most important ingredients in those models is the choice of the feature transform that is used to compute the similarity between image patches. For a long time, hand-crafted features such as the celebrated scale invariant feature transform (SIFT) defined the state-of-the-art. Triggered by the recent success of convolutional neural networks (CNNs), it is quite natural to learn such a feature transform from data. In this talk, I will show how to efficiently learn such CNN features from data using an end-to-end learning approach. It turns out that our learned models yield state-of-the-art results on a number of established benchmark databases.

VMVW01 7th September 2017
11:10 to 12:00
Dimitris Metaxas tba
VMVW01 7th September 2017
12:00 to 12:50
Yiqiu Dong Directional Regularization for Image Reconstruction
In this talk, I will introduce a new directional regularization based on the total generalized variation (TGV), which is very useful for applications with strong directional information. I will show that it has the same essential properties as TGV. With automatic direction estimators, we demonstrate the improvement of using directional TGV compared to standard TGV. Numerical simulations are carried out for image restoration and  computed tomography reconstruction.
VMVW01 7th September 2017
14:00 to 14:50
Michael Moeller Sublabel-Accurate Relaxation of Nonconvex Energies
In this talk I will present a convex relaxation technique for a particular class of energy functionals consisting of a pointwise nonconvex data term and a total variation regularization as frequently used in image processing and computer vision problems. The method is based on the technique of functional lifting in which the minimization problem is reformulated in a higher dimensional space in order to obtain a tighter approximation of the original nonconvex energy.
VMVW01 7th September 2017
14:50 to 15:40
Audrey Repetti Joint imaging and calibration using non-convex optimization
Co-authors: Jasleen Birdi (Heriot Watt University), Yves Wiaux (Heriot Watt University)

New generations of imaging devices aim to produce high resolution and high dynamic range images. In this context, the associated high-dimensional inverse problems can become extremely challenging from an algorithmic viewpoint. In addition, the quality and accuracy of the reconstructed images often depend on the precision with which the imaging device has previously been calibrated. Unfortunately, calibration depends not only on the device but may also vary with the time and direction of the acquisitions. This leads to the need to perform joint image reconstruction and calibration, and thus to solve non-convex blind deconvolution problems.

We focus on the joint calibration and imaging problem in the context of radio-interferometric imaging in astronomy. In this case, the sparse images of interest can reach gigapixel or terapixel size, while the calibration variables consist of a large number of low resolution images related to each antenna of the telescope. To solve this problem, we leverage a block-coordinate forward-backward algorithm, specifically designed to minimize non-smooth non-convex and high dimensional objective functions. We demonstrate by simulation the performance of this first joint imaging and calibration method in radio-astronomy.
VMVW01 7th September 2017
16:10 to 17:00
Christian Clason Convex regularization of discrete-valued inverse problems
We consider inverse problems where a distributed parameter is known a priori to take on values only from a given discrete set. This property can be promoted in Tikhonov regularization with the aid of a suitable convex but nondifferentiable regularization term. This allows applying standard approaches to show well-posedness and convergence rates in Bregman distance. Using the specific properties of the regularization term, it can be shown that convergence (albeit without rates) actually holds pointwise. Furthermore, the resulting Tikhonov functional can be minimized efficiently using a semi-smooth Newton method. Numerical examples illustrate the properties of the regularization term and the numerical solution.

This is joint work with Thi Bich Tram Do, Florian Kruse, and Karl Kunisch.
VMVW01 8th September 2017
09:00 to 09:50
Mila Nikolova Alternating proximal gradient descent for nonconvex regularised problems with multiconvex coupling terms
Co-author: Pauline Tan

There has been increasing interest in constrained nonconvex regularized block multiconvex optimization problems. We introduce an approach that effectively exploits the multiconvex structure of the coupling term and enables complex application-dependent regularization terms to be used. The proposed Alternating Structure-Adapted Proximal gradient descent algorithm enjoys simple, well-defined updates. Global convergence of the algorithm to a critical point is proved using the so-called Kurdyka-Lojasiewicz property. What is more, we prove that a large class of useful objective functions obeying our assumptions are subanalytic and thus satisfy the Kurdyka-Lojasiewicz property. Finally, we present an application of the algorithm to big-data airborne sequences of images.

VMVW01 8th September 2017
09:50 to 10:40
Michael Unser Representer theorems for ill-posed inverse problems: Tikhonov vs. generalized total-variation regularization
In practice, ill-posed inverse problems are often dealt with by introducing a suitable regularization functional. The idea is to stabilize the problem while promoting "desirable" solutions. Here, we are interested in contrasting the effect of Tikhonov vs. total-variation-like regularization. To that end, we first consider a discrete setting and present two representer theorems that characterize the solution of general convex minimization problems subject to $\ell_2$ vs. $\ell_1$ regularization constraints. Next, we adopt a continuous-domain formulation where the regularization semi-norm is a generalized version of total-variation tied to some differential operator L. We prove that the extreme points of the corresponding minimization problem are nonuniform L-splines with fewer knots than the number of measurements. For instance, when L is the derivative operator, then the solution is piecewise constant, which confirms a standard observation and explains why the solution is intrinsically sparse. The powerful aspect of this characterization is that it applies to any linear inverse problem.
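The contrast between $\ell_2$ and $\ell_1$ regularization can be seen in a small numerical sketch; the problem instance and solver below are illustrative assumptions, not from the talk. The $\ell_1$ solution is exactly sparse, whereas the Tikhonov ($\ell_2$) solution generically spreads energy over all coefficients:

```python
import numpy as np

def ridge(A, b, lam):
    """Closed-form Tikhonov (l2-regularized) least-squares solution."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def ista(A, b, lam, iters=5000):
    """Proximal gradient (ISTA) for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x - A.T @ (A @ x - b) / L    # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]             # sparse ground truth
b = A @ x_true
x_l2 = ridge(A, b, 1.0)
x_l1 = ista(A, b, 1.0)
```

The soft-thresholding step sets small coefficients exactly to zero, which is the discrete analogue of the few-knot L-spline solutions described above.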
VMVW01 8th September 2017
11:10 to 12:00
Pierre Weiss Estimation of linear operators from scattered impulse responses
Co-authors: Paul Escande (Université de Toulouse), Jérémie Bigot (Université de Toulouse)

In this talk, I will propose a variational method to reconstruct operators with smooth kernels from scattered and noisy impulse responses. The proposed approach relies on the formalism of smoothing in reproducing kernel Hilbert spaces and on the choice of an appropriate regularization term that takes the smoothness of the operator into account. It is numerically tractable in very large dimensions and yields a representation that can be used for achieving fast matrix-vector products. We study the estimator's robustness to noise and analyze its approximation properties with respect to the size and the geometry of the dataset. It turns out to be minimax optimal.

We finally show applications of the proposed algorithms to reconstruction of spatially varying blur operators in microscopy imaging.

VMVW01 8th September 2017
12:00 to 12:50
Olga Veksler Adaptive and Move Making Auxiliary Cuts for Binary Pairwise Energies
Co-author: Lena Gorelick (University of Western Ontario)

Many computer vision problems require optimization of binary non-submodular energies. In this context, local iterative submodularization techniques based on trust region (LSA-TR) and auxiliary functions (LSA-AUX) have been recently proposed. They achieve state-of-the-art-results on a number of computer vision applications. We extend the LSA-AUX framework in two directions. First, unlike LSA-AUX, which selects auxiliary functions based solely on the current solution, we propose to incorporate several additional criteria. This results in tighter bounds for configurations that are more likely or closer to the current solution. Second, we propose move-making extensions of LSA-AUX which achieve tighter bounds by restricting the search space. Finally, we evaluate our methods on several applications. We show that for each application at least one of our extensions significantly outperforms the original LSA-AUX. Moreover, the best extension of LSA-AUX is comparable to or better than LSA-TR on four out of six applications.
VMVW01 8th September 2017
14:00 to 14:50
Thomas Vogt Optimal Transport-Based Total Variation for Functional Lifting and Q-Ball Imaging
Co-Author: Jan Lellmann (Institute of Mathematics and Image Computing, University of Lübeck)

One strategy in functional lifting is to consider probability measures on the label space of interest, which can be discrete or continuous. The considered functionals often make use of a total variation regularizer which, when lifted, allows for a dual formulation introducing a Lipschitz constraint. In our recent work, we proposed to use a similar formulation of total variation for the restoration of so-called Q-Ball images. In this talk, we present a mathematical framework for total variation regularization that is inspired from the theory of Optimal Transport and that covers all of the previous cases, including probability measures on discrete and continuous label spaces and on manifolds. This framework nicely explains the above-mentioned Lipschitz constraint and comes with a robust theoretical background.
VMVW01 8th September 2017
14:50 to 15:40
Martin Holler Total Generalized Variation for Manifold-valued Data
Co-authors: Kristian Bredies (University of Graz), Martin Storath (University of Heidelberg), Andreas Weinmann (Darmstadt University of Applied Sciences)

Introduced in 2010, the total generalized variation (TGV) functional is nowadays amongst the most successful regularization functionals for variational image reconstruction. It is defined for an arbitrary order of differentiation and provides a convex model for piecewise smooth vector-space data. On the other hand, variational models for manifold-valued data have become popular recently, and many approaches, such as first- and second-order TV regularization, have been successfully generalized to this setting. Despite the fact that TGV regularization is, generally, considered to be preferable to such approaches, an appropriate extension for manifold-valued data was still missing. In this talk we introduce the notion of second-order total generalized variation (TGV) regularization for manifold-valued data. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present concrete instances that fulfill the proposed axioms. We prove well-posedness results and present algorithms for a numerical realization of these generalizations in the manifold setup. Further, we provide experimental results for synthetic and real data to underpin the proposed generalization numerically and show its potential for applications with manifold-valued data.
VMVW01 8th September 2017
16:10 to 17:00
Tammy Riklin Raviv Variational Methods for Image Segmentation
In the talk I will present variational methods for image segmentation with application to brain MRI tissue classification. In particular, I will present an 'unconventional' use of the multinomial logistic regression function.
This is joint work with Jacob Goldberger, Shiri Gordon and Boris Kodner.

TGMW47 19th September 2017
09:00 to 17:00
IMA Conference on Inverse Problems from Theory to Application
TGMW47 20th September 2017
09:00 to 17:00
IMA Conference on Inverse Problems from Theory to Application
TGMW47 21st September 2017
09:00 to 17:00
IMA Conference on Inverse Problems from Theory to Application
VMV 26th September 2017
15:00 to 16:00
Kewei Zhang On the existence of weak solutions of the Perona-Malik equation
The Perona-Malik forward-backward diffusion model is a well-known device in image processing for reducing noise and enhancing edges. However, due to its forward-backward nature, the Perona-Malik model is mathematically ill-posed. I will describe a differential inclusion method to solve the one-dimensional version of the Perona-Malik equation under the homogeneous Neumann boundary condition. Possible constructions of staircase solutions are also suggested. Recent developments in the higher-dimensional case are also briefly described.
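For readers who want to experiment, a minimal explicit finite-difference scheme for the 1D Perona-Malik equation with homogeneous Neumann boundary conditions can be sketched as below. This is the naive discretization, not the differential inclusion construction of the talk, and all function names and parameter values are illustrative:

```python
import numpy as np

def perona_malik_1d(u0, lam=0.5, tau=0.1, steps=50):
    """Explicit 1D Perona-Malik diffusion with Neumann boundary conditions.

    Uses the classical diffusivity g(s) = 1 / (1 + s^2 / lam^2), which is
    large for small gradients (forward diffusion) but decays for large
    ones, producing the edge-enhancing forward-backward behaviour.
    """
    u = u0.astype(float).copy()
    for _ in range(steps):
        up = np.pad(u, 1, mode="edge")        # edge padding => zero boundary flux
        grad = np.diff(up)                    # forward differences (length n+1)
        g = 1.0 / (1.0 + (grad / lam) ** 2)   # Perona-Malik diffusivity
        flux = g * grad
        u += tau * np.diff(flux)              # discrete divergence of the flux
    return u

rng = np.random.default_rng(0)
noisy = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * rng.normal(size=40)
denoised = perona_malik_1d(noisy)
```

Because the boundary flux vanishes, the scheme conserves the mean grey value, a property shared by the continuous Neumann problem.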

VMV 27th September 2017
16:00 to 17:00
Andrew Fitzgibbon Discrete images, continuous world: A better basis for discussion?

VMV 28th September 2017
16:00 to 17:00
Joachim Weickert Rothschild Lecture: Image Compression with Differential Equations
Partial differential equations (PDEs) are widely used to model phenomena in nature. In this talk we will see that they also have a high potential to compress digital images.

The idea sounds temptingly simple: We keep only a small amount of the pixels and reconstruct the remaining data with PDE-based interpolation. This gives rise to three interdependent questions:

1. Which data should be kept?
2. What are the most useful PDEs?
3. How can the selected data be encoded efficiently?

Solving these problems requires combining ideas from different mathematical disciplines such as mathematical modelling, optimisation, interpolation and approximation, and numerical methods for PDEs.

Since the talk is intended for a broad audience, we focus on the main ideas, and no specific knowledge in image processing is required.
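As a toy illustration of the reconstruction step, the following 1D sketch keeps every tenth sample and fills in the rest by solving the discrete Laplace equation with a Jacobi iteration. It is only a schematic of the idea; real codecs use far more sophisticated PDEs, data selection and encoding:

```python
import numpy as np

def diffusion_inpaint_1d(values, mask, iters=2000):
    """Fill in unknown samples by homogeneous diffusion (Laplace equation).

    Known samples (mask == True) act as Dirichlet data; unknown samples
    are iteratively replaced by the average of their two neighbours.
    """
    u = np.where(mask, values, 0.0).astype(float)
    for _ in range(iters):
        avg = 0.5 * (np.roll(u, 1) + np.roll(u, -1))
        u = np.where(mask, values, avg)       # keep the stored pixels fixed
    return u

x = np.linspace(0.0, 1.0, 101)
signal = np.sin(2.0 * np.pi * x)
mask = np.zeros_like(x, dtype=bool)
mask[::10] = True                             # keep only every 10th sample
reconstruction = diffusion_inpaint_1d(signal, mask)
```

In 1D the steady state is simply piecewise linear interpolation between the kept samples; the interesting behaviour of PDE-based codecs comes from anisotropic operators in 2D.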
VMV 6th October 2017
15:30 to 16:30
Ron Kimmel Learning Invariants and Representation Spaces of Shapes and Forms
VMV 12th October 2017
11:00 to 12:00
Lena Frerking Joint Motion Estimation and Image Reconstruction for Dynamic X-ray Tomography
VMV 20th October 2017
10:00 to 11:00
Daniel Tenbrinck Graph Methods for Manifold-valued Data
Beyond traditional processing tasks, there exist real applications in which the measured data do not lie in a Euclidean vector space but rather on a Riemannian manifold. This is the case, e.g., when dealing with Interferometric Synthetic Aperture Radar (InSAR) data consisting of phase values, or with data obtained in Diffusion Tensor Magnetic Resonance Imaging (DT-MRI). In this talk we present a framework for processing discrete manifold-valued data, for which the underlying (sampling) topology is modeled by a graph. We introduce the notion of a manifold-valued derivative on a graph and, based on this, deduce a family of manifold-valued graph operators. In particular, we introduce the graph p-Laplacian and graph infinity-Laplacian for manifold-valued data. We discuss a simple numerical scheme to compute a solution to the corresponding parabolic PDEs and apply this algorithm to different manifold-valued data, illustrating the diversity and flexibility of the proposed framework in denoising and inpainting applications. This is joint work with Dr. Ronny Bergmann (TU Kaiserslautern).
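A scalar-valued toy version of the graph setting (for p = 2, and without the Riemannian log/exp maps needed for genuinely manifold-valued data) might look as follows; the weight matrix below is an illustrative 4-cycle:

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial graph Laplacian L = D - W of a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def graph_diffusion(f, W, tau=0.1, steps=100):
    """Explicit Euler steps for the graph heat equation df/dt = -L f,
    the p = 2 member of the graph p-Laplacian family."""
    L = graph_laplacian(W)
    for _ in range(steps):
        f = f - tau * (L @ f)
    return f

# 4-cycle graph with unit weights and a noisy constant signal
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
f0 = np.array([1.0, 1.2, 0.9, 1.1])
smoothed = graph_diffusion(f0, W)
```

Since L annihilates constants and has zero column sums, the iteration preserves the mean of the signal while damping oscillatory components.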
VMV 26th October 2017
15:30 to 16:30
Stacey Levine Denoising geometric image features
Given a noisy image, it can sometimes be more productive to denoise a transformed version of the image rather than process the image data directly. In this talk we will discuss two novel frameworks for image denoising, one that involves denoising the noisy image’s level line curvature and another that regularizes the components of the noisy image in a moving frame that encodes its local geometry. Both cases satisfy nice unexpected properties that provide justification for this framework. Experiments confirm the improvement when using this approach in terms of both PSNR and SSIM as well as visually.
VMVW02 30th October 2017
09:50 to 10:40
James Nagy Spectral Computed Tomography
Co-authors: Martin Andersen (Technical University of Denmark), Yunyi Hu (Emory University)

An active area of interest in tomographic imaging is quantitative imaging, where in addition to producing an image, information about the material composition of the object is recovered. In order to obtain material composition information, it is necessary to better model the image formation (i.e., forward) problem and/or to collect additional independent measurements. In x-ray computed tomography (CT), better modeling of the physics can be done by using the more accurate polyenergetic representation of source x-ray beams, which requires solving a challenging nonlinear ill-posed inverse problem. In this talk we explore the mathematical and computational problem of polyenergetic CT when it is used in combination with new energy-windowed spectral CT detectors. We formulate this as a regularized nonlinear least squares problem, which we solve by a Gauss-Newton scheme. Because the approximate Hessian system in the Gauss-Newton scheme is very ill-conditioned, we propose a preconditioner that effectively clusters eigenvalues and, therefore, accelerates convergence when the conjugate gradient method is used to solve the linear subsystems. Numerical experiments illustrate the convergence, effectiveness, and significance of the proposed method.
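To fix ideas, a generic (dense, unregularized) Gauss-Newton iteration for a nonlinear least squares problem is sketched below on an illustrative exponential-fitting toy; the talk's contributions lie in the regularization and the preconditioned CG solves for the Gauss-Newton systems, which are not shown here:

```python
import numpy as np

def gauss_newton(r, J, x0, iters=30):
    """Gauss-Newton for min_x 0.5 * ||r(x)||^2.

    At each step, solve the normal equations (J^T J) dx = -J^T r with the
    Jacobian J of the residual r, then update x.  (A dense solve; for large
    ill-conditioned systems one would use preconditioned CG instead.)
    """
    x = x0.astype(float).copy()
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        dx = np.linalg.solve(Jx.T @ Jx, -Jx.T @ rx)
        x = x + dx
    return x

# toy model: fit y = exp(a * t) to noiseless data, starting from a poor guess
t = np.linspace(0.0, 1.0, 20)
y = np.exp(1.5 * t)
residual = lambda x: np.exp(x[0] * t) - y
jacobian = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
a_hat = gauss_newton(residual, jacobian, np.array([0.5]))
```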
VMVW02 30th October 2017
11:10 to 12:00
VMVW02 30th October 2017
12:00 to 12:50
Christoph Brune Cancer ID - From Spectral Segmentation to Deep Learning
One of the most important challenges in healthcare is the fight against cancer. A desired goal is the early detection and guided therapy of cancer patients. A very promising approach is the detection and quantification of circulating tumor cells in blood, called liquid biopsy. However, this task is similar to looking for needles in a haystack, where the needles even have unclear shapes and materials. There is a strong need for reliable image segmentation, classification and a better understanding of the generative composition of tumor cells. For a robust and reproducible quantification of tumor cell features, automatic multi-scale segmentation is the key. In recent years, new theory and algorithms for nonlinear, non-local eigenvalue problems via spectral decomposition have been developed and shown to result in promising segmentation and classification results. We analyze different nonlinear segmentation approaches and evaluate how informative the resulting spectral responses are. The success of our analysis is supported by results on simulated cells and first European clinical studies. In the last part of this talk we switch the viewpoint and study first results for deep learning of tumor cells. Via generative models there is hope for understanding tumor cells much better; however, many mathematical questions arise. This is joint work with Leonie Zeune, Stephan van Gils, Guus van Dalum and Leon Terstappen.
VMVW02 30th October 2017
14:00 to 14:50
Lars Ruthotto PDE-based Algorithms for Convolutional Neural Networks
This talk presents a new framework for image classification that exploits the relationship between the training of deep Convolutional Neural Networks (CNNs) and the problem of optimally controlling a system of nonlinear partial differential equations (PDEs). This new interpretation leads to a variational model for CNNs, which provides new theoretical insight into CNNs and new approaches for designing learning algorithms. We exemplify the myriad benefits of the continuous network in three ways. First, we show how to scale deep CNNs across image resolutions using multigrid methods. Second, we show how to scale the depth of deep CNNs in a shallow-to-deep manner to gradually increase the flexibility of the classifier. Third, we analyze the stability of CNNs and present stable variants that are also reversible (i.e., information can be propagated from input to output layer and vice versa), which in combination allows training arbitrarily deep networks with limited computational resources. This is joint work with Eldad Haber (UBC), Lili Meng (UBC), Bo Chang (UBC), Seong-Hwan Jun (UBC), and Elliot Holtham (Xtract Technologies).
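The basic correspondence can be sketched in a few lines: a residual layer is a forward-Euler step of an ODE, so depth plays the role of time. The shapes, the tanh activation and the random weights below are illustrative, not the authors' architecture:

```python
import numpy as np

def resnet_forward(x, weights, h=0.1):
    """Residual network viewed as forward-Euler time stepping.

    Each block computes x <- x + h * tanh(K x), a step of size h for the
    continuous dynamics dx/dt = tanh(K(t) x); stability of this flow is
    what the PDE/ODE viewpoint makes amenable to analysis.
    """
    for K in weights:
        x = x + h * np.tanh(K @ x)
    return x

rng = np.random.default_rng(0)
layers = [0.1 * rng.normal(size=(4, 4)) for _ in range(10)]
x0 = rng.normal(size=4)
features = resnet_forward(x0, layers)
```

Halving h while doubling the number of layers approximates the same continuous flow, which is the kind of observation underlying the multilevel and shallow-to-deep training strategies mentioned above.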
VMVW02 30th October 2017
14:50 to 15:40
Gitta Kutyniok Optimal Approximation with Sparsely Connected Deep Neural Networks
Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called $\alpha$-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments, which surprisingly show that already the standard backpropagation algorithm generates deep neural networks obeying those optimal approximation rates. This is joint work with H. Bölcskei (ETH Zurich), P. Grohs (Uni Vienna), and P. Petersen (TU Berlin).
VMVW02 30th October 2017
15:40 to 16:00
Eva-Maria Brinkmann Enhancing fMRI Reconstruction by Means of the ICBTV-Regularisation Combined with Suitable Subsampling Strategies and Temporal Smoothing
Based on the magnetic resonance imaging (MRI) technology, fMRI is a noninvasive functional neuroimaging method, which provides maps of the brain at different time steps, thus depicting brain activity by detecting changes in the blood flow and hence constituting an important tool in brain research.
An fMRI screening typically consists of three stages: At first, there is a short low-resolution prescan to ensure the proper positioning of the proband or patient. Secondly, an anatomical high-resolution MRI scan is executed, and finally the actual fMRI scan takes place, in which a series of data is acquired via fast MRI scans at consecutive time steps, thus illustrating the brain activity after a stimulus. In order to achieve an adequate temporal resolution in the fMRI data series, usually only a specific portion of the entire k-space is sampled.
Based on the assumption that the full high-resolution MR image and the fast acquired actual fMRI frames share a similar edge set (and hence the sparsity pattern with respect to the gradient), we propose to use the Infimal Convolution of Bregman Distances of the TV functional (ICBTV), first introduced in [1], to enhance the quality of the reconstructed fMRI data by using the full high-resolution MRI scan as a prior. Since in fMRI the hemodynamic response is commonly modelled by a smooth function, we moreover discuss the effect of suitable subsampling strategies in combination with temporal regularisation.

This is joint work with Julian Rasch, Martin Burger (both WWU Münster) and with Ville Kolehmainen (University of Eastern Finland).

[1] M. Moeller, E.-M. Brinkmann, M. Burger, and T. Seybold: Color Bregman TV. SIAM J. Imaging Sci. 7(4) (2014), pp. 2771-2806.
VMVW02 30th October 2017
16:30 to 17:20
Joan Bruna Geometry and Topology of Neural Network Optimization
Co-author: Daniel Freeman (UC Berkeley)

The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of a high-dimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of simplifying the nonlinear nature of the model.

In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. We first take a topological approach and characterize the absence of bad local minima by studying the connectedness of the loss surface level sets. Our theoretical work quantifies and formalizes two important facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay.

The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout the learning phase, suggesting a near-convex behavior, but that they become exponentially more curved as the energy level decays, in accordance with what is observed in practice with very low curvature attractors.
VMVW02 30th October 2017
17:20 to 18:10
Justin Romberg Structured solutions to nonlinear systems of equations
We consider the question of estimating a solution to a system of equations that involve convex nonlinearities, a problem that is common in machine learning and signal processing. Because of these nonlinearities, conventional estimators based on empirical risk minimization generally involve solving a non-convex optimization program. We propose a method (called "anchored regression") that is based on convex programming and amounts to maximizing a linear functional (perhaps augmented by a regularizer) over a convex set. The proposed convex program is formulated in the natural space of the problem, and avoids the introduction of auxiliary variables, making it computationally favorable. Working in the native space also provides us with the flexibility to incorporate structural priors (e.g., sparsity) on the solution. For our analysis, we model the equations as being drawn from a fixed set according to a probability law. Our main results provide guarantees on the accuracy of the estimator in terms of the number of equations we are solving, the amount of noise present, a measure of statistical complexity of the random equations, and the geometry of the regularizer at the true solution. We also provide recipes for constructing the anchor vector (that determines the linear functional to maximize) directly from the observed data. We will discuss applications of this technique to nonlinear problems including phase retrieval, blind deconvolution, and inverting the action of a neural network. This is joint work with Sohail Bahmani.
VMVW02 31st October 2017
09:00 to 09:50
Stacey Levine Denoising Geometric Image Features
Given a noisy image, it can sometimes be more productive to denoise a transformed version of the image rather than process the image data directly. In this talk we will discuss several novel frameworks for image denoising, including one that involves smoothing the noisy image’s level line curvature and another that regularizes the components of the noisy image in a moving frame that encodes its local geometry. Both frameworks satisfy some nice unexpected properties that provide justification for this approach. Experiments confirm an improvement over the usual denoising paradigm in terms of both PSNR and SSIM. Moreover, this approach provides a mechanism for preserving geometry in solutions of sparse patch-based models that typically exploit self-similarity. This is joint work with Thomas Batard, Marcelo Bertalmio, and Gabriela Ghimpeteanu.
VMVW02 31st October 2017
09:50 to 10:40
Ozan Öktem Task Oriented Reconstruction using Deep Learning

Machine learning has been used in image reconstruction for several years, mostly driven by the recent advent of deep learning. Deep learning based reconstruction methods have been shown to give good reconstruction quality by learning a reconstruction operator that maps data directly to a reconstruction. These methods typically perform very well when performance is measured using classical quantitative measures, such as the RMSE, but they tend to produce over-smoothed images, reducing their usefulness in applications.

We propose a framework based on statistical decision theory that allows learning a reconstruction operator that is optimal with respect to a given task, which can be the segmentation of a tumor or a classification. In this framework, deep learning is used not only to solve the inverse problem, but also to simultaneously learn how to use the reconstructed image in order to complete an end-task. We demonstrate that the framework is computationally feasible and that it can improve human interpretability of the reconstructions. We also suggest new research directions in the field of data-driven, task-oriented image reconstruction.

Related publications:
http://arxiv.org/abs/1704.04058 (accepted for publication in Inverse Problems)
http://arxiv.org/abs/1707.06474 (submitted to IEEE Transactions on Medical Imaging)
VMVW02 31st October 2017
11:10 to 12:00
Lior Horesh Accelerated Free-Form Model Discovery of Interpretable Models using Small Data
The ability to abstract the behavior of a system or a phenomenon and distill it into a consistent mathematical model is instrumental for a broad range of applications. Historically, models were manually derived in a first-principles fashion. The first-principles approach often yields interpretable models of remarkable universality using little data. Nevertheless, their derivation is time consuming and relies heavily upon domain expertise. Conversely, with the rising pervasiveness of data-driven approaches, the rapid derivation and deployment of models has become a reality. Scalability is gained through dependence upon exploitable structure (functional form). Such structures, in turn, yield non-interpretable models, require Big Data for training, and provide limited predictive power outside the span of the training set. In this talk, we will introduce an accelerated model discovery approach that attempts to bridge the two approaches, enabling the discovery of universal, interpretable models using Small Data. To accomplish this, the proposed algorithm searches for free-form symbolic models, where neither the structure nor the set of operator primitives is predetermined. The discovered models are provably globally optimal, promoting superior predictive power for unseen input. The algorithm will be demonstrated on the re-discovery of some fundamental laws of science, with references to ongoing work on the discovery of new models for hitherto unexplained phenomena.

Globally optimal symbolic regression, NIPS Interpretable ML Workshop, 2017, https://arxiv.org/abs/1710.10720
Globally optimal Mixed Integer Non-Linear Programming (MINLP) formulation for symbolic regression, IBM Technical Report ID 219095, 2016
VMVW02 31st October 2017
12:00 to 12:50
Martin Benning Nonlinear Eigenanalysis of sparsity-promoting regularisation operators
In this talk we analyse eigenfunctions of nonlinear, variational regularisation operators. We show that they are closely related to a generalisation of singular vectors of compact operators, and demonstrate key mathematical properties. We use them to show how a systematic bias of variational regularisation methods can be corrected with the help of iterative regularisation methods, and discuss conditions that guarantee the decomposition of an additive composition of multiple eigenfunctions. In the last part of the talk, we focus on utilising the concept of nonlinear eigenanalysis to learn parametrised regularisations that can effectively separate different geometric structures. This is joint work with Joana Sarah Grah, Guy Gilboa, Carola-Bibiane Schönlieb, Marie Foged Schmidt and Martin Burger.
VMVW02 31st October 2017
14:00 to 14:50
Alfred Hero The tensor graphical lasso (Teralasso)
Co-authors: Kristjian Greenewald (Harvard University), Shuheng Zhou (University of Michigan), Alfred Hero (University of Michigan)

We propose a new ultrasparse graphical model for representing multiway data based on a Kronecker sum representation of the process inverse covariance matrix. This statistical model decomposes the inverse covariance into a linear Kronecker sum representation with sparse Kronecker factors.

Under the assumption that the multiway observations are matrix-normal, the l1-sparsity-regularized log-likelihood function is convex and admits significantly faster statistical rates of convergence than other sparse matrix-normal algorithms such as the graphical lasso or Kronecker graphical lasso.

We specify a scalable composite gradient descent method for minimizing the objective function and analyze both the statistical and the computational convergence rates, showing that the composite gradient descent algorithm is guaranteed to converge at a geometric rate to the global minimizer. We will illustrate the method on several real multiway datasets, showing that we can recover sparse graphical structures in high-dimensional data.
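The algebraic heart of the model is easy to write down. A small sketch (with illustrative toy factors) of the Kronecker sum that assembles the multiway precision matrix:

```python
import numpy as np

def kronecker_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I + I ⊗ B.

    The Teralasso model represents the inverse covariance of matrix-valued
    data in this form, so each mode contributes only a small sparse factor.
    """
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

# toy precision factors for the two modes of a 2 x 3 data matrix
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, -0.5],
              [0.0, -0.5, 1.0]])
Omega = kronecker_sum(A, B)
```

A useful sanity check is the spectral identity: the eigenvalues of A ⊕ B are exactly the pairwise sums of the eigenvalues of A and B.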

VMVW02 31st October 2017
14:50 to 15:40
Francis Bach Breaking the Curse of Dimensionality with Convex Neural Networks

We consider neural networks with a single hidden layer and non-decreasing positively homogeneous activation functions like the rectified linear units. By letting the number of hidden units grow unbounded and using classical non-Euclidean regularization tools on the output weights, they lead to a convex optimization problem, and we provide a detailed theoretical analysis of their generalization performance, with a study of both the approximation and the estimation errors. We show in particular that they are adaptive to unknown underlying linear structures, such as the dependence on the projection of the input variables onto a low-dimensional subspace. Moreover, when using sparsity-inducing norms on the input weights, we show that high-dimensional non-linear variable selection may be achieved, without any strong assumption regarding the data and with a total number of variables potentially exponential in the number of observations. However, solving this convex optimization problem in infinite dimensions is only possible if the non-convex subproblem of addition of a new unit can be solved efficiently. We provide a simple geometric interpretation for our choice of activation functions and describe simple conditions for convex relaxations of the finite-dimensional non-convex subproblem to achieve the same generalization error bounds, even when constant-factor approximations cannot be found. We were not able to find strong enough convex relaxations to obtain provably polynomial-time algorithms and leave open the existence or non-existence of such tractable algorithms with non-exponential sample complexities.

VMVW02 31st October 2017
15:40 to 16:00
Jonas Adler Learned forward operators: Variational regularization for black-box models
In inverse problems, correct modelling of the forward model is typically one of the most important components for obtaining good reconstruction quality. Still, most work is done on highly simplified forward models. For example, in Computed Tomography (CT), the true forward model, given by the solution operator for the radiative transport equation, is typically approximated by the ray transform. The primary reason for this gross simplification is that higher-quality forward models are both computationally costly and typically do not have an adjoint of the derivative of the forward operator that can be feasibly evaluated. The community is not unaware of this mismatch, but the work has been focused on "the model is right, let's fix the data". We instead propose going the other way around by using machine learning to learn a mapping from the simplified model to the complicated model using deep neural networks. Hence, instead of learning how to correct complicated data so that it matches a simplified forward model, we accept that the data is always right and instead correct the forward model. We then use this learned forward operator, which is given as a composition of a simplified forward operator and a convolutional neural network, as the forward operator in a classical variational regularization scheme. We give a theoretical argument as to why correcting the forward model is more stable than correcting the data, and provide numerical examples in cone beam CT reconstruction.
VMVW02 31st October 2017
16:30 to 17:20
Julianne Chung Advancements in Hybrid Iterative Methods for Inverse Problems
Hybrid iterative methods are increasingly being used to solve large, ill-posed inverse problems, due to their desirable properties of (1) avoiding semi-convergence, whereby later reconstructions are no longer dominated by noise, and (2) enabling adaptive and automatic regularization parameter selection. In this talk, we describe some recent advancements in hybrid iterative methods for computing solutions to large-scale inverse problems. First, we consider a hybrid approach based on the generalized Golub-Kahan bidiagonalization for computing Tikhonov regularized solutions to problems where explicit computation of the square root and inverse of the covariance kernel for the prior covariance matrix is not feasible. This is useful for large-scale problems where covariance kernels are defined on irregular grids or are only available via matrix-vector multiplication, e.g., those from the Matérn class. Second, we describe flexible hybrid methods for solving l_p regularized inverse problems, where we approximate the p-norm penalization term as a sequence of 2-norm penalization terms using adaptive regularization matrices, and we exploit flexible preconditioning techniques to efficiently incorporate the weight updates. We introduce a flexible Golub-Kahan approach within a Krylov-Tikhonov hybrid framework, such that our approaches extend to general (non-square) l_p regularized problems. Numerical examples from dynamic photoacoustic tomography and space-time deblurring demonstrate the range of applicability and effectiveness of these approaches. This is joint work with Arvind Saibaba, North Carolina State University, and Silvia Gazzola, University of Bath.
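The reweighting idea behind the flexible l_p solvers can be sketched in its simplest dense form as iteratively reweighted least squares, where each iteration is a Tikhonov problem with an adaptive diagonal regularization matrix. The problem sizes and parameters below are illustrative, and the talk's methods embed these weight updates in flexible Krylov subspaces rather than dense solves:

```python
import numpy as np

def irls_lp(A, b, p=1.0, lam=0.1, iters=50, eps=1e-6):
    """Approximate min_x ||A x - b||^2 + lam * ||x||_p^p by IRLS.

    Each step solves a Tikhonov problem with diagonal weights
    w_i = (x_i^2 + eps)^(p/2 - 1), the smoothed version of |x_i|^(p-2),
    so the p-norm penalty is replaced by a sequence of 2-norm penalties.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        w = (x ** 2 + eps) ** (p / 2.0 - 1.0)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -2.0]
b = A @ x_true                     # noiseless data for the illustration
x_hat = irls_lp(A, b, p=1.0)
```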
VMVW02 31st October 2017
17:20 to 18:10
Andreas Hauptmann Learning iterative reconstruction for high resolution photoacoustic tomography
Recent advances in deep learning for tomographic reconstructions have shown great potential to create accurate and high quality images with a considerable speed-up. In this work we present a deep neural network that is specifically designed to provide high resolution 3D images from restricted photoacoustic measurements. The network is designed to represent an iterative scheme and incorporates gradient information of the data fit to compensate for limited view artefacts. Due to the high complexity of the photoacoustic forward operator, we separate training and computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung CT scans and then applied to in-vivo photoacoustic measurement data.
VMVW02 1st November 2017
09:00 to 09:50
Mila Nikolova Below the Surface of the Non-Local Bayesian Image Denoising Method
Joint work with Pablo Arias (CMLA, ENS Cachan, CNRS, University Paris-Saclay).

The non-local Bayesian (NLB) patch-based approach of Lebrun, Buades, and Morel [1] is considered a state-of-the-art method for the restoration of (color) images corrupted by white Gaussian noise. It gave rise to numerous ramifications, e.g., possible improvements and the processing of various data sets and video. This work is the first attempt to analyse the method in depth in order to understand the main phenomena underlying its effectiveness. Our analysis, corroborated by numerical tests, shows several unexpected facts. In a variational setting, the first-step Bayesian approach to learn the prior for patches is equivalent to a pseudo-Tikhonov regularisation where the regularisation parameters can be positive or negative. The practically very good results in this step are mainly due to the aggregation stage, whose importance needs to be re-evaluated.

[1] Lebrun, M., Buades, A., Morel, J.M.: A nonlocal Bayesian image denoising algorithm. SIAM J. Imaging Sci. 6(3), 1665-1688 (2013)
VMVW02 1st November 2017
09:50 to 10:40
Xavier Bresson Convolutional Neural Networks on Graphs
Convolutional neural networks have greatly improved state-of-the-art performance in computer vision and speech analysis tasks, owing to their ability to extract multiple levels of representations of data. In this talk, we are interested in generalizing convolutional neural networks from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, telecommunication networks, or word embeddings. We present a formulation of convolutional neural networks on graphs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Numerical experiments demonstrate the ability of the system to learn local stationary features on graphs.
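The key computational trick, localized spectral filtering without an eigendecomposition, can be sketched as follows using the Chebyshev recurrence (the path graph and filter coefficients are illustrative):

```python
import numpy as np

def chebyshev_filter(L, x, theta):
    """Apply the spectral filter y = sum_k theta_k T_k(L_hat) x.

    L_hat is the graph Laplacian rescaled to [-1, 1]; the Chebyshev
    recurrence T_k = 2 L_hat T_{k-1} - T_{k-2} needs only (sparse)
    matrix-vector products, and a degree-K filter is K-hop localized.
    """
    lmax = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lmax - np.eye(L.shape[0])
    t_prev, t_curr = x, L_hat @ x                  # T_0 x and T_1 x
    y = theta[0] * t_prev
    if len(theta) > 1:
        y = y + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2.0 * (L_hat @ t_curr) - t_prev   # Chebyshev recurrence
        y = y + theta[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return y

# path graph on 5 nodes, impulse at node 0, degree-2 filter
W = np.diag(np.ones(4), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
impulse = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
response = chebyshev_filter(L, impulse, theta=[0.5, 0.3, 0.2])
```

The degree-2 filter can only reach nodes within two hops of the impulse, which is exactly the locality that makes these filters cheap.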
VMVW02 1st November 2017
11:10 to 12:00
Julie Delon High-Dimensional Mixture Models For Unsupervised Image Denoising (HDMI)
This work addresses the problem of patch-based image denoising through the unsupervised learning of a probabilistic high-dimensional mixture model on the noisy patches. The model, named HDMI, proposes a full modeling of the process that is assumed to have generated the noisy patches. To overcome the potential estimation problems due to the high dimension of the patches, the HDMI model adopts a parsimonious modeling which assumes that the data live in group-specific subspaces of low dimensionality. This parsimonious modeling in turn yields a numerically stable computation of the conditional expectation of the image, which is used for denoising. The use of such a model also makes it possible to rely on model selection tools to automatically determine the intrinsic dimensions of the subspaces and the variance of the noise. This yields a blind denoising algorithm that demonstrates state-of-the-art performance, both when the noise level is known and when it is unknown. Joint work with Charles Bouveyron and Antoine Houdard.
VMVW02 1st November 2017
12:00 to 12:50
Bangti Jin Sparse Recovery by l0 Penalty
Sparsity is a powerful tool for signal recovery and has achieved great success in many practical applications. Conventionally this is realized numerically by imposing an l1 penalty, which is the convex relaxation of the l0 penalty. In this talk, I will discuss our recent efforts in the efficient numerical solution of the l0 problem. I will describe a primal-dual active set algorithm, and present some numerical results to illustrate its convergence. This talk is based on joint work with Dr. Yuling Jiao and Dr. Xiliang Lu.
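For readers who want a runnable baseline, iterative hard thresholding is perhaps the simplest l0-type algorithm; note this is not the primal-dual active set method of the talk, and the dimensions and test signal below are illustrative:

```python
import numpy as np

def iht(A, b, s, iters=200):
    """Iterative hard thresholding for min ||A x - b||^2  s.t.  ||x||_0 <= s.

    Alternates a gradient step on the data fit with projection onto the set
    of s-sparse vectors (keep the s largest entries in magnitude).
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / ||A||_2^2 step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (b - A @ x))        # gradient step
        keep = np.argsort(np.abs(x))[-s:]         # indices of s largest entries
        mask = np.zeros_like(x, dtype=bool)
        mask[keep] = True
        x = np.where(mask, x, 0.0)                # hard thresholding
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 100)) / np.sqrt(40.0)
x_true = np.zeros(100)
x_true[[3, 50, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = iht(A, b, s=3)
```

With this step size the data-fit term is non-increasing by a standard majorization argument, and for sufficiently incoherent matrices the iteration is known to recover the true support.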
VMVW02 2nd November 2017
09:00 to 09:50
Silvia Gazzola Krylov Subspace Methods for Sparse Reconstruction
Krylov subspace methods are popular numerical linear algebra tools that can be successfully employed to regularize linear large-scale inverse problems, such as those arising in image deblurring and computed tomography. Though they are commonly used as purely iterative regularization methods (where the number of iterations acts as a regularization parameter), they can also be employed in a hybrid fashion, i.e., to solve Tikhonov-regularized problems (where both the number of iterations and the Tikhonov parameter play the role of regularization parameters, which can be chosen adaptively). Krylov subspace methods can naturally handle unconstrained penalized least squares problems. The goal of this talk is to present a common framework that exploits a flexible version of well-known Krylov methods such as CGLS and GMRES to handle nonnegativity constraints and regularization terms expressed with respect to the 1-norm, resulting in an efficient way to enforce sparse reconstructions of the solution. Numerical experiments and comparisons with other well-known methods for the computation of nonnegative and sparse solutions will be presented. These results have been obtained working jointly with James Nagy (Emory University), Paolo Novati (University of Trieste), Yves Wiaux (Heriot-Watt University), and Julianne Chung (Virginia Polytechnic Institute and State University).
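As a concrete illustration of the purely iterative side, here is a minimal CGLS sketch (plain, unconstrained CGLS only; the flexible and constrained variants of the talk are not reproduced, and the small test matrix is invented). The iteration count is what plays the role of the regularization parameter:

```python
import numpy as np

def cgls(A, b, iters):
    """Conjugate gradient applied to the normal equations A^T A x = A^T b.
    For ill-posed problems, stopping early acts as regularization."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()          # residual b - A x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        if gamma < 1e-28:               # converged; avoid 0/0 below
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
x_true = rng.standard_normal(5)
x = cgls(A, A @ x_true, iters=50)   # consistent system: CGLS recovers x_true
```

On this small consistent system CGLS converges to the exact solution; in the ill-posed setting one would instead stop well before convergence.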
VMVW02 2nd November 2017
09:50 to 10:40
Pierre Weiss Generating sampling patterns in MRI
In this talk I will describe a few recent results on the generation of sampling patterns in MRI. In the first part of my talk, I will provide mathematical models describing the sampling problem in MRI. This will allow me to show that the traditional way mathematicians look at an MRI scanner is usually far too idealized and that important ingredients are currently missing from the theories. The mathematical modelling shows that a natural way to generate a pattern consists in projecting a density onto a set of admissible measures. I will then describe two projection algorithms. The first is based on a distance defined through a convolution mapping the measures to L^2, while the second is based on the L^2 transportation distance. After describing a few original applications of this formalism, I will show how it makes it possible to significantly reduce scanning times in MRI systems, with real in vivo experiments. An outcome of this work is that compressed sensing, as it stands, only allows for moderate acceleration factors, while other ideas that take advantage of all the degrees of freedom of an MRI scanner yield far more significant improvements.
VMVW02 2nd November 2017
11:10 to 12:00
Anders Hansen On computational barriers in data science and the paradoxes of deep learning
The use of regularisation techniques such as l^1 and Total Variation in Basis Pursuit and Lasso, as well as linear and semidefinite programming and neural networks (deep learning), has seen great success in data science. Yet, we will discuss the following paradox: it is impossible to design algorithms to find minimisers accurately for these problems when given inaccurate input data, even when the inaccuracies can be made arbitrarily small. The paradox implies that any algorithm designed to solve these problems will fail in the following way: For fixed dimensions and any small accuracy parameter epsilon > 0, one can choose an arbitrarily large time T and find an input such that the algorithm will run for longer than T and still not have reached epsilon accuracy. Moreover, it is impossible to determine when the algorithm should halt to achieve an epsilon-accurate solution. The largest epsilon for which this failure happens is called the Breakdown-epsilon. Typically, the Breakdown-epsilon > 1/2 even when the input is bounded by one, is well-conditioned, and the objective function can be computed with arbitrary accuracy.

Despite the paradox, we explain why, empirically, many modern algorithms perform very well in real-world scenarios. In particular, when restricting to subclasses of problems, the Breakdown-epsilon may shrink. Moreover, one can typically find algorithms that are polynomial in L and n when L < log(1/Breakdown-epsilon); however, beyond log(1/Breakdown-epsilon), any algorithm, even randomised, becomes arbitrarily slow and will not be able to halt and guarantee L correct digits in the output.

The above result leads to the paradoxes of deep learning: (1) One cannot guarantee the existence of algorithms for accurately training the neural network, even if there is one minimum and no local minima. Moreover, (2) one can have 100% success rate on arbitrarily many test cases, yet uncountably many misclassifications on elements that are arbitrarily close to the training set.

This is joint work with Alex Bastounis (Cambridge) and Verner Vlacic (ETH).
VMVW02 2nd November 2017
12:00 to 12:50
Josiane Zerubia Stochastic geometry for automatic object detection and tracking
In this talk, we combine methods from probability theory and stochastic geometry to put forward new solutions to the multiple object detection and tracking problem in high resolution remotely sensed image sequences. First, we present a spatial marked point process model to detect a pre-defined class of objects based on their visual and geometric characteristics. Then, we extend this model to the temporal domain and create a framework based on spatio-temporal marked point process models to jointly detect and track multiple objects in image sequences. We propose the use of simple parametric shapes to describe the appearance of these objects. We build new, dedicated energy based models consisting of several terms that take into account both the image evidence and physical constraints such as object dynamics, track persistence and mutual exclusion. We construct a suitable optimization scheme that allows us to find strong local minima of the proposed highly non-convex energy. As the simulation of such models comes with a high computational cost, we turn our attention to the recent filter implementations for multiple-object tracking, which are known to be less computationally expensive. We propose a hybrid sampler by combining the Kalman filter with the standard Reversible Jump MCMC. High performance computing techniques are also used to increase the computational efficiency of our method. We provide an analysis of the proposed framework, which yields very good detection and tracking performance at the price of increased model complexity. Tests have been conducted on both high-resolution satellite and UAV image sequences.
VMVW02 2nd November 2017
14:00 to 14:50
Mario Figueiredo Divide and Conquer: Patch-based Image Denoising, Restoration, and Beyond
Patch-based image processing methods can be seen as an application of the “divide and conquer” strategy: since it is admittedly too difficult to formulate a global prior for an entire image, methods in this class process overlapping patches thereof, and combine the results to obtain an image estimate. A particular class of patch-based methods uses Gaussian mixture models (GMMs) to model the patches, in what can be seen as yet another application of the divide and conquer principle, now in the space of patch configurations. Different components of the GMM specialize in modeling different types of typical patch configurations. Although many other statistical image models exist, using a GMM for patches has several relevant advantages: (i) the corresponding minimum mean squared error (MMSE) estimate can be obtained in closed form; (ii) the variance of the estimate can also be computed, providing a principled way to weight the estimates when combining the patch estimates to obtain the full image estimate; (iii) the GMM parameters can be estimated from a dataset of clean patches, from the noisy image itself, or from a combination of the two; (iv) theoretically, a GMM can approximate arbitrarily well any probability density (under mild conditions). In this talk, I will overview the class of patch/GMM-based approaches to image restoration. After reviewing the first members of this family of methods, which simply addressed denoising, I will describe several more recent advances, namely: use of class-adapted GMMs (i.e., tailored to specific image classes, such as faces, fingerprints, text); tackling inverse problems other than denoising (namely, deblurring, hyperspectral super-resolution, compressive imaging), by plugging GMM-based denoisers in the loop of an iterative algorithm (in what has recently been called the plug-and-play approach); joint restoration/segmentation of images; application to blind deblurring.
This is joint work with Afonso Teodoro, José Bioucas-Dias, Marina Ljubenović, and Milad Niknejad.
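Point (i) above, the closed-form MMSE estimate under a GMM patch prior, is compact enough to write out: the estimate is a posterior-responsibility-weighted sum of per-component Wiener filters. A minimal numpy sketch (illustrative, not the authors' code; the single zero-mean component in the usage example is invented):

```python
import numpy as np

def gmm_mmse_denoise(y, weights, means, covs, sigma2):
    """MMSE estimate of x from y = x + n, n ~ N(0, sigma2 * I), under the
    GMM prior x ~ sum_k w_k N(mu_k, Sigma_k): a posterior-weighted
    combination of per-component Wiener estimates."""
    d = y.shape[0]
    log_r, xhats = [], []
    for w, mu, S in zip(weights, means, covs):
        C = S + sigma2 * np.eye(d)          # marginal covariance of y
        Cinv = np.linalg.inv(C)
        diff = y - mu
        # log of w_k * N(y; mu_k, C_k)
        log_r.append(np.log(w) - 0.5 * (diff @ Cinv @ diff
                     + np.linalg.slogdet(C)[1] + d * np.log(2.0 * np.pi)))
        xhats.append(mu + S @ Cinv @ diff)  # component Wiener estimate
    log_r = np.array(log_r)
    r = np.exp(log_r - log_r.max())
    r /= r.sum()                            # posterior responsibilities
    return sum(rk * xk for rk, xk in zip(r, xhats))

# With one zero-mean component this reduces to classical Wiener shrinkage.
y = np.array([1.0, 1.0])
xhat = gmm_mmse_denoise(y, [1.0], [np.zeros(2)], [np.diag([4.0, 1.0])], 1.0)
```

The per-component posterior weights are also what make point (ii) possible, since the conditional variance has a similar closed form.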
VMVW02 2nd November 2017
14:50 to 15:40
Marcelo Pereyra Bayesian analysis and computation for convex inverse problems: theory, methods, and algorithms
This talk presents some new developments in theory, methods, and algorithms for performing Bayesian inference in high-dimensional inverse problems that are convex, with application to mathematical and computational imaging. These include new efficient stochastic simulation and optimisation Bayesian computation methods that tightly combine proximal optimisation with Markov chain Monte Carlo techniques; strategies for estimating unknown model parameters and performing model selection; methods for calculating Bayesian confidence intervals for images and performing uncertainty quantification analyses; and new theory regarding the role of convexity in maximum-a-posteriori and minimum-mean-square-error estimation. The new theory, methods, and algorithms are illustrated with a range of mathematical imaging experiments.
VMVW02 2nd November 2017
15:40 to 16:00
Pol del Aguila Pla Cell detection by functional inverse diffusion and group sparsity
Biological assays in which particles generated by cells bind to a surface and can be imaged to reveal the cells' location are ubiquitous in biochemical, pharmacological and medical research. In this talk, I will describe the physics of these processes, a 3D radiation-diffusion-adsorption-desorption partial differential equation, and present our novel parametrization of its solution (i.e., the observation model) in terms of convolutional operators. Then, I will present our proposal to invert this observation model through a functional optimization problem with group-sparsity regularization and explain the reasoning behind this choice of regularizer. I will also present the results needed to derive the accelerated proximal gradient algorithm for this problem, and justify why we chose to formulate the algorithm in the original function spaces where our observation model operates. Finally, I will briefly comment on our choice of discretization, and show the final performance of our algorithm in both synthetic and real data. arXiv preprints: arXiv:1710.0164 , arXiv:1710.01622
VMVW02 2nd November 2017
16:30 to 17:20
Claire Boyer Structured compressed sensing and recent theoretical advances on optimal sampling
Joint works with Jérémie Bigot and Pierre Weiss on the one hand, and Ben Adcock on the other hand. First, we will theoretically justify the applicability of compressed sensing (CS) in real-life applications. To do so, CS theorems compatible with physical acquisition constraints will be introduced. These results do not only encompass structure in the acquisition but also structured sparsity of the signal of interest. This theory considerably extends the standard framework of CS. Secondly, recent advances on optimal sampling in CS will be presented, in the sense that the sampling strategy minimizes the bound on the required number of measurements for CS recovery.
VMVW02 2nd November 2017
17:20 to 18:10
Tuomo Valkonen What do regularisers do?
Which regulariser is the best? Is any of them any good? Do they introduce artefacts? What other qualitative properties do they have? These are some of the questions on which I want to shed some light. Specifically, I will first discuss recent work on natural conditions, based on an analytical study of a bilevel learning approach, that ensure that regularisation does indeed improve an image. Based on a more computational bilevel learning study, I will also try to answer which regulariser is the best one. Secondly, I will discuss geometrical aspects of the solutions to higher-order regularised imaging problems.
VMVW02 3rd November 2017
09:00 to 09:50
Irene Waldspurger Alternating projections for phase retrieval with random sensing vectors
Phase retrieval consists of reconstructing an unknown element in a complex vector space from the moduli of linear measurements. The first reconstruction algorithms for which theoretical guarantees could be proven relied on convexification techniques. It has only recently been realized that similar guarantees hold for non-convex local search methods, which are faster and simpler, provided that their starting point is carefully chosen. We will explain how to establish these guarantees for the best-known local search method: alternating projections. We will also discuss the role of the initialization method.
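A minimal numpy sketch of the alternating projections iteration for real-valued measurements (illustrative only; the problem sizes and the naive random initialization are invented, and the talk's point is precisely that a careful initialization is needed for the guarantees):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 60
A = rng.standard_normal((m, n))        # random sensing vectors
x_true = rng.standard_normal(n)
b = np.abs(A @ x_true)                 # magnitude-only measurements

A_pinv = np.linalg.pinv(A)
x = rng.standard_normal(n)             # naive random initialization
errs = []
for _ in range(200):
    z = A @ x
    errs.append(np.linalg.norm(np.abs(z) - b))
    z = b * np.sign(z)                 # project onto the magnitude constraint
    x = A_pinv @ z                     # least-squares projection onto range(A)
```

The measurement misfit is non-increasing along the iterates, a standard property of alternating projections; convergence to the true signal (up to a global sign here, a global phase in the complex case) is what the careful initialization is needed for.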
VMVW02 3rd November 2017
09:50 to 10:40
Martin Holler Analysis and applications of structural-prior-based total variation regularization for inverse problems
Structural priors and joint regularization techniques, such as parallel level set methods and joint total variation, have recently become quite popular in the context of variational image processing. Their main application scenarios are particular settings in multi-modality/multi-spectral imaging, where there is an expected correlation between different channels of the image data. In this context, one can distinguish between two different approaches for exploiting such correlations: joint reconstruction techniques that treat all available channels equally, and structural prior techniques that assume some ground-truth structural information to be available. This talk focuses on a particular instance of the second type of methods, namely structural total-variation-type functionals, i.e., functionals which integrate a spatially dependent pointwise function of the image gradient for regularization. While this type of method has been shown to work well in practical applications, some of their analytical properties are not immediate. Those include a proper definition for BV functions and non-smooth a priori data, as well as existence results and regularization properties for standard inverse problem settings. In this talk we address some of these issues and show how they can partially be overcome using duality. Employing the framework of functions of a measure, we define structural-TV-type functionals via lower semicontinuous relaxation. Since the relaxed functionals are, in general, not explicitly available, we show that instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of the relaxation is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. The talk concludes with proof-of-concept numerical examples. This is joint work with M. Hintermüller and K. Papafitsoros (both from the Weierstrass Institute Berlin).
VMVW02 3rd November 2017
11:10 to 12:00
Raymond Chan A Nuclear-norm Model for Multi-Frame Super-resolution Reconstruction
In this talk, we give a new variational approach to obtain super-resolution images from multiple low-resolution image frames extracted from video clips. First the displacements between the low-resolution frames and the reference frame are computed by an optical flow algorithm. The displacement matrix is then decomposed into the product of two matrices corresponding to the integer and fractional displacement matrices respectively. The integer displacement matrices give rise to a non-convex low-rank prior which is then convexified to give the nuclear-norm regularization term. By adding a standard 2-norm data fidelity term to it, we obtain our proposed nuclear-norm model. The alternating direction method of multipliers can then be used to solve the model. Comparison of our method with other models on synthetic and real video clips shows that our resulting images are more accurate with fewer artifacts. It also provides much finer and more discernible details. Joint work with Rui Zhao. Research supported by HKRGC.
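When such a model is solved with ADMM, the nuclear-norm term enters through its proximal map, singular-value soft-thresholding. A minimal numpy sketch of that subproblem (illustrative; the random 5x4 matrix and the threshold are invented, and this is only one building block of the full model):

```python
import numpy as np

def svt(X, tau):
    """Proximal map of tau * ||.||_* (nuclear norm): soft-threshold the
    singular values of X, which promotes low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 4))
Y = svt(X, 0.5)
```

The singular vectors are kept and only the spectrum is shrunk, which is what makes the nuclear norm the natural convex surrogate for the low-rank prior.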
VMVW02 3rd November 2017
12:00 to 12:50
Mihaela Pricop-jeckstadt From spatial learning to machine learning: an unsupervised approach with applications to behavioral science
In this talk we consider an example-driven approach for identifying ability patterns from data partitions based on learning behaviour in the water maze experiment. A modification of the k-means algorithm for longitudinal data, as introduced in [1], is used to identify clusters based on the learning variable (see [3]). The association between these clusters and the flying ability variables is statistically tested in order to characterize the partitions in terms of flying traits. Since the learning variables seem to reflect flying abilities, we propose a new sparse clustering algorithm in an approach modelling the covariance matrix by a Kronecker product. Consistency and an EM algorithm are also studied in this framework. References: [1] Genolini, C., Ecochard, R., Benghezal, M., et al., "kmlShape: An Efficient Method to Cluster Longitudinal Data (Time-Series) According to Their Shapes", PLOS ONE, Vol. 11, 2016. [2] Sung, K.K. and Poggio, T., "Example-based learning for view-based human face detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, 39-51, 1998. [3] Rosipal, R. and Kraemer, N., "Overview and recent advances in partial least squares", Subspace, Latent Structure and Feature Selection, Lecture Notes in Computer Science, Vol. 3940, 34-51, 2006.
VMVW02 3rd November 2017
14:00 to 14:50
Robert Plemmons Sparse Recovery Algorithms for 3D Imaging using Point Spread Function Engineering
Co-authors: Chao Wang (Mathematics, Chinese University of Hong Kong), Raymond Chan (Mathematics, Chinese University of Hong Kong), Sudhakar Prasad (Physics, University of New Mexico)

Imaging and localizing point sources with high accuracy in a 3D volume is an important but challenging task. For example, super-resolution 3D single molecule localization is an area of intense interest in biology (cell imaging, folding, membrane behavior, etc.), in chemistry (spectral diffusion, molecular distortions, etc.), and in physics (structures of materials, quantum optics, etc.). We consider here the high-resolution imaging problem of 3D point source image recovery from 2D data using methods based on point spread function (PSF) design. The methods involve a new technique, recently patented by S. Prasad, for applying rotating point spread functions with a single lobe to obtain depth from defocus. The amount of rotation of the PSF encodes the depth position of the point source. The distribution of point sources is discretized on a cubical lattice where the indexes of nonzero entries represent the 3D locations of point sources. The values of these entries are the point source fluxes. Finding the locations and fluxes is a large-scale sparse 3D inverse problem and we have developed solution algorithms based on sparse recovery using non-convex optimization. Applications to high-resolution single molecule localization microscopy are described, as well as localization of space debris using a space-based telescope. Sparse recovery optimization methods, including the Continuous Exact L0 (CEL0) algorithm, are used in our numerical experiments.
VMVW02 3rd November 2017
14:50 to 15:40
Jeff Calder The weighted p-Laplacian and semi-supervised learning
Semi-supervised learning refers to machine learning algorithms that make use of both labeled data and unlabeled data for learning tasks. Examples include large scale nonparametric regression and classification problems, such as predicting voting preferences of social media users, or classifying medical images. In today's big data world, there is an abundance of unlabeled data, while labeled data often requires expert labeling and is expensive to obtain. This has led to a resurgence of semi-supervised learning techniques, which use the topological or geometric properties of large amounts of unlabeled data to aid the learning task. In this talk, I will discuss some new rigorous PDE scaling limits for existing semi-supervised learning algorithms and their practical implications. I will also discuss how these scaling limits suggest new ideas for fast algorithms for semi-supervised learning.
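For p = 2 the weighted p-Laplacian model reduces to classical Laplace learning: labels are extended harmonically over the graph by solving one sparse linear system. A minimal numpy sketch (illustrative; the six-node two-cluster graph and the +-1 labels are invented):

```python
import numpy as np

def laplace_learning(W, labeled, f_labeled):
    """p = 2 harmonic label extension: solve L_uu f_u = -L_ul f_l on the
    unlabeled nodes, where L = D - W is the graph Laplacian."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    unlabeled = [i for i in range(n) if i not in labeled]
    f = np.zeros(n)
    f[labeled] = f_labeled
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    L_ul = L[np.ix_(unlabeled, labeled)]
    f[unlabeled] = np.linalg.solve(L_uu, -L_ul @ np.asarray(f_labeled, float))
    return f

# Two triangles joined by one weak edge; one labeled node per cluster.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
f = laplace_learning(W, [0, 5], [1.0, -1.0])
```

The harmonic extension respects the cluster structure: nodes in the cluster of the +1 label receive positive values, those in the other cluster negative ones.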
VMV 8th November 2017
15:30 to 16:30
Kewei Zhang A convexity based method for approximation and interpolation of sampled functions
I will briefly introduce the notions of compensated convex transforms and their basic properties. We apply these transforms to define devices for approximating and interpolating sampled functions in Euclidean spaces. I will describe the Hausdorff stability property with respect to the samples and the error estimates for inpainting given continuous or Lipschitz functions. Prototype examples will also be presented, and numerical experiments on applications to salt-and-pepper noise reduction, level-set reconstruction and image inpainting will also be illustrated. This is joint work with Elaine Crooks and Antonio Orlando.

VMV 15th November 2017
15:00 to 16:00
Ya-xiang Yuan Monotone properties of Barzilai-Borwein Method
In optimization, the classical steepest descent method performs poorly: it converges only linearly and is badly affected by ill-conditioning. The Barzilai-Borwein (BB) method is a two-point step size gradient method, where the step size is derived from a two-point approximation to the secant equation underlying quasi-Newton methods. Paired with a non-monotone line search, BB gradient methods work very well on general unconstrained differentiable problems. Though well known as a stepsize technique for the gradient method, the BB method has one undesirable property: it is nonmonotone. In this talk, we discuss some monotone properties of the BB method.
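A minimal sketch of the BB1 iteration on a small quadratic (illustrative; the quadratic, the tiny first step, and the iteration budget are invented, and no line search is used):

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=100):
    """Gradient descent with the BB1 step size
    alpha = (s.s) / (s.y),  s = x_k - x_{k-1},  y = g_k - g_{k-1},
    a two-point approximation to the secant equation."""
    x = x0.astype(float).copy()
    g = grad(x)
    alpha = 1e-3                        # small initial step before BB kicks in
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, ydiff = x_new - x, g_new - g
        denom = s @ ydiff
        if abs(denom) > 1e-12:
            alpha = (s @ s) / denom     # BB1 step; the iteration is nonmonotone
        x, g = x_new, g_new
    return x

# Moderately ill-conditioned quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x = bb_gradient_descent(lambda v: A @ v - b, np.zeros(2))
```

On strictly convex quadratics the iterates converge even though the function values typically do not decrease monotonically, which is the behaviour the talk's monotonicity results address.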

VMV 29th November 2017
15:30 to 16:30
Haider Ali Multi-Region Image Segmentation using Generalized Averages and One Level Set Function
I will briefly introduce the notions of generalized averages, their particular cases, analysis and level set representation. We apply these generalized averages to construct a general image data term. The properties of the general data term will also be discussed for multi-region image segmentation. A few test results will be exhibited. Moreover, performance in selective segmentation will also be displayed. This is joint work with Noor Badshah, Ke Chen, Gulzar Ali Khan and Nosheen.

VMV 5th December 2017
10:00 to 11:00
Joerg Polzehl Structural adaptation - a statistical concept for image denoising
Images are often characterized by their homogeneity structure, i.e., discontinuities and smoothness within homogeneous regions, and intensity distributions that depend on the image-generating experiment. Structural adaptation employs such qualitative assumptions on the homogeneity structure in a sequential multi-scale procedure that controls local bias and variance while implicitly recovering discontinuities. I'll discuss the basic principles of the procedure, the model-dependent but data-independent selection of its parameters by a propagation condition, and its main properties. Generalizations include patch-based procedures and methods for noise quantification. I'll use examples from 2D and 3D imaging, as well as from diffusion MR (5D) and quantitative MR (multiple 3D), for illustration.

VMVW03 11th December 2017
10:00 to 11:00
Nir Sochen Point correspondences in the functional map framework
VMVW03 11th December 2017
11:30 to 12:30
Olga Veksler Convexity and Other Shape priors for Single and Multiple Object Segmentation
Shape is a useful regularization prior for image segmentation. First we will talk about a convexity shape prior for single object segmentation. In the context of discrete optimization, object convexity is represented as a sum of 3-clique potentials penalizing any 1-0-1 configuration on all straight lines. We show that these nonsubmodular interactions can be efficiently optimized using a trust region approach. While the quadratic number of all 3-cliques is prohibitively high, we designed a dynamic programming technique for evaluating and approximating these cliques in linear time. Our experiments demonstrate the general usefulness of the proposed convexity constraint on synthetic and real image segmentation examples. Unlike standard second order length regularization, our convexity prior is scale invariant, does not have shrinking bias, and is virtually parameter-free. Segmenting multiple objects with a convex shape prior presents its own challenges, as distinct objects interact in a non-trivial manner. We extend our work on single convex object optimization by proposing a multi-object convexity shape prior for multi-label image segmentation. Next we consider simple shape priors, i.e. priors that can be optimized exactly with a single graph cut, in the context of single object segmentation. Segmenting multiple objects with such simple shape priors presents its own challenges. We propose a new class of energies for segmentation of multiple foreground objects with a common simple shape prior. Our energy involves infinity constraints. For such energies the standard expansion algorithm has no optimality guarantees and in practice gets stuck in bad local minima. Therefore, we develop a new move-making algorithm, which we call double expansion. In contrast to expansion, the new move allows each pixel to choose a label from a pair of new labels or keep the old label. This results in an algorithm with optimality guarantees and robust performance in practice.
We experiment with several types of shape prior such as star-shape, compactness and a novel symmetry prior, and empirically demonstrate the advantage of the double expansion.
VMVW03 11th December 2017
13:30 to 14:30
Emanuele Rodolà Spectral approaches to partial deformable 3D shape correspondence
In this talk we will present our recent line of work on (deformable) partial 3D shape correspondence in the spectral domain. We will first introduce Partial Functional Maps (PFM), showing how to robustly formulate the shape correspondence problem under missing geometry with the language of functional maps. We use perturbation analysis to show how removal of shape parts changes the Laplace-Beltrami eigenfunctions, and exploit it as a prior on the spectral representation of the correspondence. We will show further extensions to deal with the presence of clutter (deformable object-in-clutter) and multiple pieces (non-rigid puzzles). In the second part of the talk, we will introduce a novel approach to the same problem which operates completely in the spectral domain, avoiding the cumbersome alternating optimization used in the previous approaches. This allows matching shapes with constant complexity independent of the number of shape vertices, and yields state-of-the-art results on challenging correspondence benchmarks in the presence of partiality and topological noise. Authors: E. Rodola, L. Cosmo, O. Litany, J. Masci, A. Bronstein, M. Bronstein, A. Torsello, D. Cremers
VMVW03 11th December 2017
14:30 to 15:30
Wei Zhu Euler's elastica based segmentation models and the fast algorithms
In this talk, we will discuss two image segmentation models that employ L^1 and L^2 Euler's elastica, respectively, as the regularization of the active contour. When compared with the conventional contour-length-based regularization, these higher-order regularizations lead to new features, including automatically connecting broken parts of objects and being well suited for fine elongated structures. More interestingly, with the L^1 Euler's elastica as the contour regularization, the segmentation model is able to single out objects with convex shapes. We will also discuss fast algorithms for these models based on the augmented Lagrangian method. Numerical experiments will be presented to illustrate the features of these Euler's elastica based segmentation models.
VMVW03 11th December 2017
16:00 to 17:00
Andrew Zisserman 3D Shape Inference from Images using Deep Learning
The talk will cover two approaches to obtaining 3D shape from images. First, we introduce a deep Convolutional Neural Network (ConvNet) architecture that can generate depth maps given a single or multiple images of an object. The ConvNet is trained using a prediction loss on both the depth map and the silhouette. Using a set of sculptures as our 3D objects, we show that the ConvNet is able to generalize to new objects, unseen during training, and that its performance improves given more input views of the object. This is joint work with Olivia Wiles. Second, we use ConvNets to infer 3D shape attributes, such as planarity, symmetry and occupied space, from a single image. For this we have assembled an annotated dataset of 150K images of over 2000 different sculptures. We show that 3D attributes can be learnt from these images and generalize to images of other (non-sculpture) object classes. This is joint work with Abhinav Gupta and David Fouhey.
VMVW03 12th December 2017
09:00 to 10:00
Laurent Younes Riemannian Diffeomorphic Mapping and Some Applications
We review a few applications of large deformation diffeomorphic metric mapping and some of its variants, within a sub-Riemannian framework in diffeomorphism groups and shape spaces. After describing the basic principles, the talk will focus on applications in the construction of laminar coordinates in the cortical ribbon, on the quantification of fine motor tasks in children through letter tracing and on the statistical estimation of a changepoint in brain shape evolution for Alzheimer’s disease.
VMVW03 12th December 2017
10:00 to 11:00
Lok Ming Lui Recent advances of Computational Quasiconformal Geometry in Imaging, Graphics and Visions
Computational quasiconformal geometry (CQC) has recently attracted much attention and found successful applications in various fields, such as imaging, computer graphics and vision. In this talk, I will give an overview of the recent advances in CQC. More specifically, I will talk about how quasiconformal structures can be efficiently and accurately computed on different surface representations, such as meshes and point clouds. Applications of CQC in medical imaging and vision will also be discussed. Finally, the possibility of extending CQC to higher dimensions will also be examined.
VMVW03 12th December 2017
11:30 to 12:30
Lourdes Agapito Capturing 3D models of deformable objects from monocular sequences
VMVW03 12th December 2017
13:30 to 14:30
Maks Ovsjanikov Efficient regularization of functional map computations
In this talk, I will give a brief overview of the functional map framework and then describe some recent approaches that make it possible to incorporate both geometric and topological constraints into functional map computations. Namely, I will discuss a method to obtain functional maps that follow structural properties of pointwise correspondences, ways to encode embedding-dependent (second fundamental form) information, and finally a technique to efficiently compute bi-directional correspondences using functional map adjoints.
VMVW03 12th December 2017
14:30 to 15:30
Weihong Guo Simultaneous Image Segmentation and Registration and Applications
Image segmentation and registration play active roles in machine vision and image analysis. In particular, image registration helps segment images when they have low contrast and/or partial missing information. We explore the joint problem of segmenting and registering a template (e.g. current) image given a reference (e.g. past) image. We solve the joint problem by minimizing a functional that integrates Geodesic Active Contours and Nonlinear Elastic registration. The template image is modeled as a hyper-elastic material (St. Venant-Kirchhoff model) which undergoes deformations under applied forces. To segment the deforming template, a two-phase level set based energy is introduced together with a weighted total variation term that depends on gradient features of the deforming template. This particular choice allows for fast solution using the dual formulation of the total variation. This allows the segmenting front to accurately track spontaneous changes in the shape of objects embedded in the template image as it deforms. To solve the underlying registration problem we use gradient descent, adopt an implicit-explicit method, and use the Fast Fourier Transform. This is joint work with former PhD student Thomas Atta-Fosu.
VMVW03 12th December 2017
16:00 to 17:00
Carl Olsson Compact Rank Models and Optimization
VMVW03 13th December 2017
09:00 to 10:00
Yuri Boykov Low-order graphical models for shapes and hierarchies in segmentation
This talk discusses simple (low-order) graphical models that impose practically powerful constraints on shapes and on the hierarchical structure of segments in the context of binary and multi-object labeling of images. We discuss properties and optimization for generic shape priors such as "star", geodesic star, and hedgehog, as well as models for partially-ordered labeling of interacting objects. While the talk focuses on biomedical applications where structural constraints (shapes and hierarchy) come from anatomy, the general graphical models discussed are useful for semi-supervised computer vision problems. Related papers appeared at CVPR 2017, 2016, ICCV 2009, 2005, and ECCV 2008.
VMVW03 13th December 2017
10:00 to 11:00
Tieyong Zeng Two-Stage/Three-Stage Method for Image Segmentation
The Mumford–Shah model is one of the most important image segmentation models and has been studied extensively in the last twenty years. In this talk, we propose a two-stage segmentation method based on the Mumford–Shah model. The first stage of our method is to find a smooth solution to a convex variant of the Mumford–Shah model. In the second stage, the segmentation is done by thresholding this smooth solution into different phases. Experimental results show the good performance of the proposed method. The idea is then generalized to image segmentation under non-Gaussian noise, color image segmentation, and selective image segmentation for medical images.
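A minimal sketch of the two-stage idea, with stand-ins: plain heat diffusion replaces the convex Mumford–Shah solve of stage one (the actual model is different), and a 1D two-means split implements the thresholding of stage two.

```python
import numpy as np

def heat_smooth(f, tau=0.2, steps=50):
    """Stage 1 stand-in: explicit heat diffusion with periodic boundaries.
    (The talk solves a convex Mumford-Shah variant; plain diffusion keeps
    this sketch dependency-free and still yields a smooth image.)"""
    u = f.astype(float).copy()
    for _ in range(steps):
        u += tau * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return u

def threshold_two_phase(u, iters=30):
    """Stage 2: split the smooth image into two phases (1D two-means)."""
    c0, c1 = u.min(), u.max()
    for _ in range(iters):
        lab = np.abs(u - c1) < np.abs(u - c0)
        c0, c1 = u[~lab].mean(), u[lab].mean()
    return lab.astype(int)

# Noisy two-phase test image: bright disc on a dark background
x, y = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
clean = ((x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2).astype(int)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(clean.shape)
labels = threshold_two_phase(heat_smooth(noisy))
print((labels == clean).mean())   # fraction of correctly recovered pixels
```

One practical attraction of the two-stage structure is that the threshold (or the number of phases) can be changed without recomputing the smooth stage-one solution.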
VMVW03 13th December 2017
11:30 to 12:30
Stephen Marsland Langevin equations for landmark image registration with uncertainty
Co-author: Tony Shardlow (University of Bath)

Pairs of images can be brought into alignment (registered) by finding corresponding points on the two images and deforming one of them so that the points match. This can be carried out as a Hamiltonian boundary-value problem, and then provides a diffeomorphic registration between images. However, small changes in the positions of the landmarks can produce large changes in the resulting diffeomorphism. We formulate a Langevin equation for looking at small random perturbations of this registration. The Langevin equation and three computationally convenient approximations are introduced and used as prior distributions. A Bayesian framework is then used to compute a posterior distribution for the registration, and also to formulate an average of multiple sets of landmarks.
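To make the Langevin idea concrete in the simplest possible setting (this is not the authors' registration model, whose equations live on landmark trajectories of a Hamiltonian boundary-value problem): an Euler–Maruyama discretisation of an overdamped Langevin equation samples small random perturbations of a landmark configuration under an assumed quadratic potential. All constants and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
q_ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # reference landmarks

def grad_U(q, k=4.0):
    """Gradient of an assumed quadratic potential U = (k/2)|q - q_ref|^2
    tethering the landmarks to the reference configuration."""
    return k * (q - q_ref)

def langevin_samples(n_steps=20000, dt=1e-3, beta=8.0):
    """Euler-Maruyama discretisation of dq = -grad U dt + sqrt(2/beta) dW."""
    q = q_ref.copy()
    out = []
    for _ in range(n_steps):
        q = q - grad_U(q) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal(q.shape)
        out.append(q.copy())
    return np.array(out[n_steps // 2:])   # discard burn-in

samples = langevin_samples()
# Stationary density ~ exp(-beta U): per-coordinate variance 1/(beta*k) = 1/32
print(samples.var(axis=0).mean())
```

The samples approximate a Gaussian prior over landmark perturbations; in the Bayesian framework of the talk such a prior is combined with a likelihood to obtain a posterior over registrations.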
VMVW03 14th December 2017
09:00 to 10:00
Peter Michor Soliton solutions for the elastic metric on spaces of curves
Joint work with: Martin Bauer (Florida State University), Martins Bruveris (Brunel University London), Philipp Harms (University of Freiburg). Abstract: Some first order Sobolev metrics on spaces of curves admit soliton-like geodesics, i.e., geodesics whose momenta are sums of delta distributions. It turns out that these geodesics can be found within the submanifold of piecewise linear curves, which is totally geodesic for these metrics. Consequently, the geodesic equation reduces to a finite-dimensional ordinary differential equation for a dense set of initial conditions.
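The soliton ansatz mentioned in the abstract can be written down explicitly; the notation below (N, c_i, p_i) is my own, not the speakers':

```latex
% Soliton ansatz: momentum concentrated on the N vertices c_i(t) of a
% piecewise linear curve, with vector-valued weights p_i(t)
m(t) = \sum_{i=1}^{N} p_i(t)\, \delta_{c_i(t)}
```

Substituting this ansatz into the geodesic equation for the first-order Sobolev metric reduces it, as the abstract states, to a finite-dimensional system of ordinary differential equations for the positions c_i and momenta p_i.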
VMVW03 14th December 2017
10:00 to 11:00
Gabriel Peyré Optimal Transport and Deep Generative Models
Co-authors: Marco Cuturi (ENSAE), Aude Genevay (ENS)

In this talk, I will review some recent advances on deep generative models through the prism of Optimal Transport (OT). OT provides a way to define robust loss functions to perform high-dimensional density fitting using generative models. This defines so-called Minimum Kantorovitch Estimators (MKE) [1]. This approach is especially useful to recast several unsupervised deep learning methods in a unifying framework. Most notably, as shown respectively in [2,3] (and reviewed in [4]), Variational Autoencoders (VAE) and Generative Adversarial Networks (GAN) can be interpreted as (respectively primal and dual) approximate MKE. This is a joint work with Aude Genevay and Marco Cuturi.

References:
[1] Federico Bassetti, Antonella Bodini, and Eugenio Regazzini. On minimum Kantorovich distance estimators. Statistics & Probability Letters, 76(12):1298–1302, 2006.
[2] Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Carl-Johann Simon-Gabriel, and Bernhard Schoelkopf. From optimal transport to generative modeling: the VEGAN cookbook. arXiv:1705.07642, 2017.
[3] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv:1701.07875, 2017.
[4] Aude Genevay, Gabriel Peyré, and Marco Cuturi. GAN and VAE from an Optimal Transport Point of View. arXiv:1706.01807, 2017.
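As a self-contained illustration of Minimum Kantorovich Estimation in the simplest case (one dimension, where the optimal transport loss has a closed form via sorting): the sketch below fits the location parameter of a toy generator by minimizing the Wasserstein-1 distance to data. The generator and the grid search are my own illustrative choices, not the methods of [1-4].

```python
import numpy as np

rng = np.random.default_rng(3)

def wasserstein1_1d(x, y):
    """W1 between two equal-size 1D empirical measures: in 1D the optimal
    coupling matches sorted samples, so W1 is the mean absolute sorted gap."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

# Data from an unknown shifted distribution; generator g_theta(z) = z + theta
data = rng.standard_normal(4000) + 1.5
z = rng.standard_normal(4000)

# Minimum Kantorovich estimation of theta by a simple grid search on the loss
thetas = np.linspace(-3.0, 3.0, 121)
losses = [wasserstein1_1d(z + t, data) for t in thetas]
theta_hat = thetas[int(np.argmin(losses))]
print(theta_hat)   # close to the true shift 1.5
```

In higher dimensions no such sorting formula exists, which is where the entropic and dual (adversarial) approximations discussed in the talk come in.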

VMVW03 14th December 2017
11:30 to 12:30
Alex Bronstein Geometry and learning in 3D correspondence problems
The need to compute correspondence between three-dimensional objects is a fundamental ingredient in numerous computer vision and graphics tasks. In this talk, I will show how several geometric notions related to the Laplacian spectrum provide a set of tools for efficiently calculating correspondence between deformable shapes. I will also show how this framework combined with recent ideas in deep learning promises to bring correspondence problems to new levels of accuracy.
VMVW03 14th December 2017
13:30 to 14:30
John Ball Nonlinear elasticity and image processing
A survey of nonlinear elasticity will be given with a view to possible applications in image processing. Then some particular image processing issues arising from recent experiments on low hysteresis alloys will be described.
VMVW03 14th December 2017
14:30 to 15:30
Christopher Zach When to lift (a function to higher dimensions) and when not
In the first part of my talk I will describe several instances where reformulating a difficult optimization problem in higher dimensions (i.e. enlarging the set of minimized variables) is beneficial. My particular interest is robust cost functions, e.g. those utilized for correspondence search, which serve as a prototype for general difficult minimization problems. In the second part I will describe problem instances, of relevance especially in 3D computer vision, where reducing the set of involved variables (i.e. the opposite of lifting) is highly beneficial. In particular, I will clarify the relationship between variable projection methods and the Schur complement often employed in Gauss-Newton based algorithms. Joint work with Je Hyeong Hong and Andrew Fitzgibbon.
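The elimination the talk refers to can be shown on a small linear system: in a Gauss-Newton step with the variables split into blocks x and y, eliminating y via the Schur complement gives exactly the same answer as solving the full system. This is a minimal numerical check with an assumed SPD block structure, not the talk's actual problems.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 3

# Symmetric positive definite block system H [x; y] = [a; b],
# as in a Gauss-Newton normal equation with two variable blocks
M = rng.standard_normal((n + m, n + m))
H = M @ M.T + (n + m) * np.eye(n + m)
A, B, D = H[:n, :n], H[:n, n:], H[n:, n:]
rhs = rng.standard_normal(n + m)
a, b = rhs[:n], rhs[n:]

# Eliminate y: (A - B D^{-1} B^T) x = a - B D^{-1} b, then back-substitute
S = A - B @ np.linalg.solve(D, B.T)          # Schur complement of D in H
x = np.linalg.solve(S, a - B @ np.linalg.solve(D, b))
y = np.linalg.solve(D, b - B.T @ x)

print(np.allclose(np.concatenate([x, y]), np.linalg.solve(H, rhs)))  # True
```

When D is block-diagonal (one block per 3D point, say), the elimination is cheap, which is why this reduction is so effective in bundle-adjustment-style problems.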
VMVW03 14th December 2017
16:00 to 17:00
Darryl Holm Stochastic Metamorphosis in Imaging Science
In the pattern matching approach to imaging science, the process of metamorphosis in template matching with dynamical templates was introduced in [7]. In [5] the metamorphosis equations of [7] were recast into the Euler–Poincaré variational framework of [4] and shown to contain the equations for a perfect complex fluid [3].

This result related the data structure underlying the process of metamorphosis in image matching to the physical concept of order parameter in the theory of complex fluids [2]. In particular, it cast the concept of Lagrangian paths in imaging science as deterministically evolving curves in the space of diffeomorphisms acting on image data structure, expressed in Eulerian space. In contrast, the landmarks in the standard LDDMM approach are Lagrangian.

For the sake of introducing an Eulerian uncertainty quantification approach in imaging science, we extend the method of metamorphosis to apply to image matching along stochastically evolving, time-dependent curves on the space of diffeomorphisms. The approach is guided by recent progress in developing stochastic Lie transport models for uncertainty quantification in fluid dynamics in [6, 1].

[1] D. O. Crisan, F. Flandoli, and D. D. Holm. Solution properties of a 3D stochastic Euler fluid equation. arXiv preprint arXiv:1704.06989, 2017. URL https://arxiv.org/abs/1704.06989.
[2] F. Gay-Balmaz, D. D. Holm, and T. S. Ratiu. Geometric dynamics of optimization. Comm. in Math. Sciences, 11(1):163–231, 2013.
[3] D. D. Holm. Euler–Poincaré dynamics of perfect complex fluids. In P. Newton, P. Holmes, and A. Weinstein, editors, Geometry, Mechanics, and Dynamics: in honor of the 60th birthday of Jerrold E. Marsden, pages 113–167. Springer, 2002.
[4] D. D. Holm, J. E. Marsden, and T. S. Ratiu. The Euler–Poincaré equations and semidirect products with applications to continuum theories. Adv. in Math., 137:1–81, 1998.
[5] D. D. Holm, A. Trouvé, and L. Younes. The Euler–Poincaré theory of metamorphosis. Quarterly of Applied Mathematics, 67:661–685, 2009.
[6] D. D. Holm. Variational principles for stochastic fluid dynamics. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 471(2176):20140963, 2015.
[7] A. Trouvé and L. Younes. Metamorphoses through Lie group action. Found. Comp. Math., 173–198, 2005.
VMVW03 15th December 2017
09:00 to 10:00
Zachary Boyd Formulations of community detection in terms of total variation and surface tension
Network data structures arise in numerous applications, e.g. in image segmentation when graph cut methods are used or in the form of a similarity graph on the pixels in certain clustering methods. Networks also occur as social, biological, technological, and transportation networks, for instance, all of which are receiving a lot of attention right now. "Community detection" is a body of techniques for extracting large- and medium-scale structure from such graphs. Most community detection formalizations turn out to be NP-hard and in practice are horrendously nonconvex. Practitioners from many fields are struggling to find formulations that (1) helpfully summarize the network data and (2) are computationally tractable. Most formulations have neither property. In my talk, I will give two examples of how existing community detection models can be understood in terms of objects familiar in image processing. The first example casts the popular modularity heuristic as a graph total variation problem with a soft area balance constraint. The second views the more flexible stochastic block model as a discrete surface tension minimization problem, which in the two-community case is exactly equivalent to the first example. These equivalences can potentially benefit both the network science community and the image processing community by allowing tools from one domain to be applied to the other. As an example, I show how mean curvature flow, phase field, and threshold dynamics approaches to continuum total variation minimization can be adapted to community detection in graphs, including nonlocal means graphs for hyperspectral images and videos. The positive results hint that the methods commonly used in image processing can be readily applied to much more general problems involving arbitrary graph structures. I will also mention some possible future work in the reverse direction, where I would like to bring methods from the network science literature into image processing. 
This is joint work with Egil Bae, Andrea Bertozzi, Mason Porter, and Xue-Cheng Tai.
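A minimal numerical illustration of the first equivalence (my own toy construction, not the authors' experiments): for a 0/1 community indicator u, the graph total variation equals the weight of the cut, so a planted two-community partition scores lower than a random relabelling of the same nodes.

```python
import numpy as np

rng = np.random.default_rng(5)

# Planted two-community graph: dense within blocks, sparse across
n = 20
W = (rng.random((n, n)) < 0.1).astype(float)
W[:10, :10] = (rng.random((10, 10)) < 0.8)
W[10:, 10:] = (rng.random((10, 10)) < 0.8)
W = np.triu(W, 1)
W = W + W.T                      # symmetric adjacency, zero diagonal

def graph_tv(u, W):
    """Graph total variation: half the weighted sum of |u_i - u_j|.
    For a 0/1 indicator u this is exactly the weight of the cut."""
    return 0.5 * (W * np.abs(u[:, None] - u[None, :])).sum()

planted = np.r_[np.ones(10), np.zeros(10)]
random_part = rng.permutation(planted)
print(graph_tv(planted, W), graph_tv(random_part, W))  # planted cut is smaller
```

Minimizing graph TV alone would collapse to a single community, which is why the modularity-as-TV formulation in the talk carries an additional soft area balance term.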
VMVW03 15th December 2017
10:00 to 11:00
Elaine Crooks Compensated convexity, multiscale medial axis maps, and sharp regularity of the squared distance function
Co-authors: Kewei Zhang (University of Nottingham, UK), Antonio Orlando (Universidad Nacional de Tucuman, Argentina)

Compensated convex transforms enjoy tight-approximation and locality properties that can be exploited to develop multiscale, parametrised methods for identifying singularities in functions. When applied to the squared distance function to a closed subset of Euclidean space, these ideas yield a new tool for locating and analyzing the medial axis of geometric objects, called the multiscale medial axis map. This consists of a parametrised family of nonnegative functions that provides a Hausdorff-stable multiscale representation of the medial axis, in particular producing a hierarchy of heights between different parts of the medial axis depending on the distance between the generating points of that part of the medial axis. Such a hierarchy enables subsets of the medial axis to be selected by simple thresholding, which tackles the well-known stability issue that small perturbations in an object can produce large variations in the corresponding medial axis. A sharp regularity result for the squared distance function is obtained as a by-product of the analysis of this multiscale medial axis map.

This is joint work with Kewei Zhang (Nottingham) and Antonio Orlando (Tucuman).
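For reference, the two compensated convex transforms can be written as follows; the multiscale medial axis map is then built from the upper transform of the squared distance function (stated here up to the authors' normalisation conventions):

```latex
% Lower and upper compensated convex transforms of f at scale \lambda > 0
C^l_\lambda(f)(x) = \operatorname{conv}\!\bigl[f + \lambda|\cdot|^2\bigr](x) - \lambda|x|^2 ,
\qquad
C^u_\lambda(f)(x) = \lambda|x|^2 - \operatorname{conv}\!\bigl[\lambda|\cdot|^2 - f\bigr](x).

% Multiscale medial axis map of a closed set K, built from the squared
% distance function d^2(\cdot, K):
M_\lambda(x; K) = C^u_\lambda\bigl(d^2(\cdot, K)\bigr)(x) - d^2(x, K).
```

Thresholding M_λ at increasing levels then selects the coarser parts of the medial axis first, which is the hierarchy described in the abstract.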
VMVW03 15th December 2017
11:30 to 12:30
François-Xavier Vialard An interpolating distance between Wasserstein and Fisher-Rao
In this talk, we present the natural extension of the Wasserstein metric to the space of positive Radon measures. We present the dynamic formulation and derive its associated static formulation. We then relate this new metric to the Camassa-Holm equation and show that the Camassa-Holm equation can be recast as an incompressible Euler equation in higher dimensions. We also present applications of this new metric as a similarity measure in inverse problems in imaging.
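The dynamic formulation mentioned above has, schematically, the following form; the constants and normalisations vary between papers, so treat the δ-weighting below as indicative rather than exact:

```latex
% Interpolating (Wasserstein--Fisher--Rao) metric between positive measures:
% a transport term |v|^2 (Wasserstein) and a growth term \alpha^2 (Fisher--Rao),
% balanced by a length-scale \delta
\mathrm{WF}_\delta(\rho_0, \rho_1)^2 = \inf_{(\rho_t, v_t, \alpha_t)}
  \int_0^1 \!\! \int_\Omega \bigl( |v_t(x)|^2 + \delta^2 \, \alpha_t(x)^2 \bigr)
  \, d\rho_t(x) \, dt ,

% subject to the continuity equation with source term
\partial_t \rho_t + \operatorname{div}(\rho_t v_t) = \alpha_t \, \rho_t ,
\qquad \rho_{t=0} = \rho_0, \quad \rho_{t=1} = \rho_1 .
```

The source term α_t ρ_t is what allows mass to be created or destroyed along the path, so the infimum is finite even when ρ_0 and ρ_1 have different total masses.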