# Timetable (VMVW01)

## Variational methods, new optimisation techniques and new fast numerical algorithms

Monday 4th September 2017 to Friday 8th September 2017

**09:00 to 09:50** Antonin Chambolle (CNRS; École Polytechnique): *Minimization of curvature-dependent functionals*

In this joint work with T. Pock (TU Graz, Austria) we present a relaxation of line energies which depend on the curvature, such as the elastica functional, introduced in particular to complete contours in image inpainting problems. Our relaxation is convex and tight on C^2 curves. INI 1

**09:50 to 10:40** Ke Chen (University of Liverpool): *Fractional Order Derivatives Regularization: Models, Algorithms and Applications*

In variational imaging and other inverse problem modelling, regularisation plays a major role. In recent years, high-order regularisers such as the mean curvature, the Gaussian curvature and Euler's elastica have been increasingly studied and applied, and many impressive results over the widely used gradient-based models have been reported. Here we present some results from studying another class of high and non-integer order regularisers based on fractional order derivatives, and focus on two aspects of this class of models: (i) theoretical analysis and advantages; (ii) efficient algorithms. We found that models with regularisation by fractional order derivatives are convex in a suitable space, and that algorithms exploiting structured matrices can be employed to design efficient solvers. Applications to restoration and registration are illustrated. This opens many opportunities to apply these regularisers to a wide class of imaging problems.

Ke Chen and J. P. Zhang, EPSRC Liverpool Centre for Mathematics in Healthcare, Centre for Mathematical Imaging Techniques, and Department of Mathematical Sciences, The University of Liverpool, United Kingdom [ http://tinyurl.com/EPSRC-LCMH ] INI 1

**10:40 to 11:10** Morning Coffee

**11:10 to 12:00** Michael Ng (Hong Kong Baptist University): *Tensor Data Analysis: Models and Algorithms*

In this talk, we discuss some models and algorithms for tensor data analysis.
Examples in imaging sciences are presented to illustrate the results of the proposed models and algorithms. INI 1

**12:00 to 12:50** Kristian Bredies (University of Graz): *Preconditioned and accelerated Douglas-Rachford algorithms for the solution of variational imaging problems*

Co-author: Hongpeng Sun (Renmin University of China)

We present preconditioned and accelerated versions of the Douglas-Rachford (DR) splitting method for the solution of convex-concave saddle-point problems which often arise in variational imaging. The methods make it possible to replace the solution of a linear system in each iteration step of the corresponding DR iteration by approximate solvers, without the need to control the error. These iterations are shown to converge in Hilbert space under minimal assumptions on the preconditioner and for any step size. Moreover, ergodic sequences associated with the iteration admit a convergence rate in terms of restricted primal-dual gaps. Further, strong convexity of one or both of the involved functionals allows for acceleration strategies that yield improved convergence rates. The methods are applied to non-smooth and convex variational imaging problems. We discuss denoising and deconvolution with different data discrepancies and total variation (TV) as well as total generalized variation (TGV) penalties. Preconditioners which are specific to these problems are presented, the results of numerical experiments are shown, and the benefits of the respective preconditioned iterations are discussed. INI 1

**12:30 to 18:00** Computational Challenges in Image Processing - http://www.turing-gateway.cam.ac.uk/event/ofbw32

**12:50 to 14:00** Buffet Lunch at INI
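The preconditioned and accelerated methods in the talk above build on the basic Douglas-Rachford iteration. As a point of reference, here is a minimal, unpreconditioned DR sketch; the toy splitting `min lam*||x||_1 + 0.5*||x - b||^2` and all names are illustrative choices, not taken from the talk.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_quad(v, t, b):
    """Proximal map of t * 0.5*||. - b||^2."""
    return (v + t * b) / (1.0 + t)

def douglas_rachford(b, lam, t=1.0, iters=500):
    """Minimize lam*||x||_1 + 0.5*||x - b||^2 by plain DR splitting."""
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_quad(z, t, b)             # resolvent step on the quadratic term
        y = prox_l1(2.0 * x - z, t * lam)  # reflected step on the l1 term
        z = z + y - x                      # fixed-point update
    return prox_quad(z, t, b)

b = np.array([3.0, -0.5, 1.0])
x = douglas_rachford(b, lam=1.0)
# x converges to the soft-thresholding of b: [2.0, 0.0, 0.0]
```

The preconditioned variants of the talk replace the exact resolvent (here a closed-form prox) by an approximate linear-system solver while keeping convergence guarantees.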
**09:00 to 09:50** Laurent Cohen (CNRS & Université Paris-Dauphine): *Geodesic Methods for Interactive Image Segmentation using Finsler metrics*

Minimal paths have long been used as an interactive tool to find edges or tubular structures as cost-minimizing curves. The user usually provides start and end points on the image and gets the minimal path as output. These minimal paths correspond to minimal geodesics according to some adapted metric. They are a way to find a (set of) curve(s) globally minimizing the geodesic active contours energy. Finding a geodesic distance can be solved by the Eikonal equation using the fast and efficient Fast Marching method. Different metrics can be adapted to various problems. In the past years we have introduced different extensions of these minimal paths that improve either the interactive aspects or the results. For example, the metric can take into account both scale and orientation of the path. This leads to solving an anisotropic minimal path problem in a 2D or 3D+radius space. We recently introduced the use of Finsler metrics, which allow taking the local curvature into account in order to smooth the path. The approach can also be adapted to take into account a region term inside the closed curve formed by a set of minimal geodesics.

Co-authors: Da Chen and J.-M. Mirebeau INI 1

**09:50 to 10:40** Tom Goldstein (University of Maryland, College Park): *Automating stochastic gradient methods with adaptive batch sizes*

This talk will address several issues related to training neural networks using stochastic gradient methods. First, we'll talk about the difficulties of training in a distributed environment, and present a new method called centralVR for boosting the scalability of training methods. Then, we'll talk about the issue of automating stochastic gradient descent, and show that learning rate selection can be simplified using "Big Batch" strategies that adaptively choose minibatch sizes. INI 1
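The idea of adapting the minibatch size can be caricatured in a few lines: grow the batch whenever the averaged stochastic gradient is no larger than its own standard error (the signal is buried in noise). This is only a toy sketch on a quadratic; the test function, thresholds and schedule are assumptions of this sketch, not the centralVR or Big Batch algorithms of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])  # minimizer of f(x) = 0.5*||x - mu||^2

def noisy_grads(x, batch):
    """Per-sample stochastic gradients: true gradient plus unit noise."""
    return (x - mu) + rng.normal(size=(batch, x.size))

x = np.zeros(2)
batch, lr = 4, 0.1
for _ in range(300):
    g = noisy_grads(x, batch)
    gbar = g.mean(axis=0)
    # If the averaged gradient is smaller than its standard error,
    # double the batch size to restore a useful signal-to-noise ratio.
    if np.linalg.norm(gbar) < np.linalg.norm(g.std(axis=0)) / np.sqrt(batch):
        batch = min(2 * batch, 4096)
    x -= lr * gbar
```

Far from the optimum the small batch suffices; near the optimum the criterion keeps doubling the batch, which shrinks the noise floor without any learning-rate decay.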
**10:40 to 11:10** Morning Coffee

**11:10 to 12:00** Vladimir Kolmogorov (IST Austria): *Valued Constraint Satisfaction Problems*

I will consider the Valued Constraint Satisfaction Problem (VCSP), whose goal is to minimize a sum of local terms where each term comes from a fixed set of functions (called a "language") over a fixed discrete domain. I will present recent results characterizing languages that can be solved using the basic LP relaxation. This includes languages consisting of submodular functions, as well as their generalizations. One such generalization is k-submodular functions. In the second part of the talk I will present an application of such functions in computer vision. Based on joint papers with Igor Gridchyn, Andrei Krokhin, Michal Rolinek, Johan Thapper and Stanislav Zivny. INI 1

**12:00 to 12:50** Sung Ha Kang (Georgia Institute of Technology): *Efficient Numerical Methods for Variational Inpainting Models*

Co-authors: Maryam Yashtini (Georgia Institute of Technology), Wei Zhu (The University of Alabama)

Recent developments of fast algorithms, based on operator splitting, augmented Lagrangian, and alternating minimization, enabled us to revisit some of the variational image inpainting models. In this talk, we will present some fast algorithms for Euler's elastica image inpainting model, and a variational edge-weighted image colorization model based on chromaticity and brightness. Main ideas of the models and algorithms, some analysis, and numerical results will be presented. INI 1

**12:50 to 14:00** Lunch @ Wolfson Court

**14:00 to 14:50** Jalal Fadili: *Sensitivity Analysis with Degeneracy: Mirror Stratifiable Functions*

This talk will present a set of sensitivity analysis and activity identification results for a class of convex functions with a strong geometric structure, that we coin "mirror-stratifiable".
These functions are such that there is a bijection between a primal and a dual stratification of the space into partitioning sets, called strata. This pairing is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms. This class of functions encompasses all regularizers routinely used in signal and image processing, machine learning, and statistics. We show that this "mirror-stratifiable" structure enjoys a nice sensitivity theory, allowing us to study stability of solutions of optimization problems to small perturbations, as well as activity identification of first-order proximal splitting-type algorithms. Existing results in the literature typically assume that, under a non-degeneracy condition, the active set associated to a minimizer is stable to small perturbations and is identified in finite time by optimization schemes. In contrast, our results do not require any non-degeneracy assumption: in consequence, the optimal active set is not necessarily stable anymore, but we are able to track precisely the set of identifiable strata. We show that these results have crucial implications when solving challenging ill-posed inverse problems via regularization, a typical scenario where the non-degeneracy condition is not fulfilled. Our theoretical results, illustrated by numerical simulations, allow us to characterize the instability behaviour of the regularized solutions by locating the set of all low-dimensional strata that can potentially be identified by these solutions. This is joint work with Jérôme Malick and Gabriel Peyré. INI 1

**14:50 to 15:40** Zuoqiang Shi (Tsinghua University): *Low dimensional manifold model for image processing*

In this talk, I will introduce a novel low dimensional manifold model for image processing problems. This model is based on the observation that for many natural images, the patch manifold usually has a low-dimensional structure.
Then, we use the dimension of the patch manifold as a regularization to recover the original image. Using formulas from differential geometry, this problem is reduced to solving a Laplace-Beltrami equation on the manifold. The Laplace-Beltrami equation is solved by the point integral method. Numerical tests show that this method gives very good results in image inpainting, denoising and super-resolution problems. This is joint work with Stanley Osher and Wei Zhu. INI 1

**15:40 to 16:10** Afternoon Tea

**16:10 to 17:00** Gabriele Steidl (University of Kaiserslautern): *Convex Analysis in Hadamard Spaces*

Joint work with M. Bacak, R. Bergmann, M. Montag and J. Persch

The aim of the talk is two-fold: 1. A well-known result of H. Attouch states that the Mosco convergence of a sequence of proper convex lower semicontinuous functions defined on a Hilbert space is equivalent to the pointwise convergence of the associated Moreau envelopes. We generalize this result to Hadamard spaces. More precisely, while it has already been known that the Mosco convergence of a sequence of convex lower semicontinuous functions on a Hadamard space implies the pointwise convergence of the corresponding Moreau envelopes, the converse implication was an open question. We now fill this gap. Our result has several consequences. It implies, for instance, the equivalence of the Mosco and Frolik-Wijsman convergences of convex sets. As another application, we show that there exists a complete metric on the cone of proper convex lower semicontinuous functions on a separable Hadamard space such that a sequence of functions converges in this metric if and only if it converges in the sense of Mosco. 2. We extend the parallel Douglas-Rachford algorithm to the manifold-valued setting. INI 1

**19:30 to 22:00** Formal Dinner at Emmanuel College
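The Moreau envelopes central to the Attouch-type result in the last talk of the day are easy to illustrate in the familiar Hilbert-space (here even scalar) setting: the envelope of the absolute value, computed through its proximal map, is exactly the Huber function. A small sanity check (the Hadamard-space machinery of the talk is far beyond this sketch):

```python
import numpy as np

def prox_abs(x, t):
    """Proximal map of t*|.| (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def moreau_env_abs(x, t):
    """Moreau envelope e_t f(x) = min_y |y| + (x - y)^2 / (2t), f = |.|,
    evaluated via the minimizer y = prox_abs(x, t)."""
    p = prox_abs(x, t)
    return np.abs(p) + (x - p) ** 2 / (2.0 * t)

def huber(x, t):
    """Closed form of the envelope: quadratic near 0, linear in the tails."""
    return np.where(np.abs(x) <= t, x ** 2 / (2.0 * t), np.abs(x) - t / 2.0)

xs = np.linspace(-3.0, 3.0, 101)
assert np.allclose(moreau_env_abs(xs, 0.5), huber(xs, 0.5))
```

As t varies, these smooth envelopes converge pointwise to |x|, mirroring the envelope/Mosco equivalence discussed in the talk.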
**09:00 to 09:50** Xue-Cheng Tai (Hong Kong Baptist University): *Fast Algorithms for Euler's Elastica energy minimization and applications*

This talk is divided into three parts. In the first part, we will introduce the essential ideas in using augmented Lagrangian/operator-splitting techniques for fast numerical algorithms for minimizing Euler's elastica energy. In the second part, we consider an Euler's elastica based image segmentation model. An interesting feature of this model lies in its preference for convex segmentation contours. However, due to the high-order and non-differentiable term, it is often nontrivial to minimize the associated functional. In this work, we propose using the augmented Lagrangian method to tackle the minimization problem. In particular, we design a novel augmented Lagrangian functional that deals with the mean curvature term differently from those in previous works. The new treatment reduces the number of Lagrange multipliers employed and, more importantly, it helps represent the curvature more effectively and faithfully. Numerical experiments validate the efficiency of the proposed augmented Lagrangian method and also demonstrate new features of this particular segmentation model, such as shape-driven and data-driven properties. In the third part, we will introduce some recent fast algorithms for minimizing Euler's elastica energy for interface problems. The method combines level set and binary representations of interfaces. The algorithm only needs to solve a Rudin-Osher-Fatemi (ROF) problem and a redistancing of the level set function to minimize the elastica energy. The algorithm is easy to implement and efficient. The content of this talk is based on joint works with Egil Bae, Tony Chan, Jinming Duan and Wei Zhu.
Related links:
1) ftp://ftp.math.ucla.edu/pub/camreport/cam17-36.pdf
2) https://www.researchgate.net/profile/Xue_Cheng_Tai/publication/312519936_Augmented_Lagrangian_method_for_an_Euler's_elastica_based_segmentation_model_that_promotes_convex_contours/links/58a1b9d292851c7fb4c1907f/Augmented-Lagrangian-method-for-an-Eulers-elastica-based-segmentation-model-that-promotes-convex-contours.pdf
3) https://www.researchgate.net/publication/257592616_Image_Segmentation_Using_Euler%27s_Elastica_as_the_Regularization

INI 1

**09:50 to 10:40** Thomas Pock (Graz University of Technology): *End-to-end learning of CNN features in discrete optimization models for motion and stereo*

Co-authors: Patrick Knöbelreiter (Graz University of Technology), Alexander Shekhovtsov (Technical University of Prague), Gottfried Munda (Graz University of Technology), Christian Reinbacher (Amazon)

For many years, discrete optimization models such as conditional random fields (CRFs) have defined the state of the art for classical correspondence problems such as motion and stereo. One of the most important ingredients in those models is the choice of the feature transform that is used to compute the similarity between image patches. For a long time, hand-crafted features such as the celebrated scale invariant feature transform (SIFT) defined the state of the art. Triggered by the recent success of convolutional neural networks (CNNs), it is quite natural to learn such a feature transform from data. In this talk, I will show how to efficiently learn such CNN features from data using an end-to-end learning approach.
It turns out that our learned models yield state-of-the-art results on a number of established benchmark databases.

Related links:
- https://arxiv.org/pdf/1707.06427 - Scalable Full Flow with Learned Binary Descriptors
- https://arxiv.org/pdf/1611.10229 - End-to-End Training of Hybrid CNN-CRF Models for Stereo

INI 1

**10:40 to 11:10** Morning Coffee

**11:10 to 12:00** Dimitris Metaxas (Rutgers, The State University of New Jersey; University of Toronto; National Technical University of Athens): *tba* INI 1

**12:00 to 12:50** Yiqiu Dong (Technical University of Denmark): *Directional Regularization for Image Reconstruction*

In this talk, I will introduce a new directional regularization based on the total generalized variation (TGV), which is very useful for applications with strong directional information. I will show that it has the same essential properties as TGV. With automatic direction estimators, we demonstrate the improvement of using directional TGV compared to standard TGV. Numerical simulations are carried out for image restoration and computed tomography reconstruction. INI 1

**12:50 to 14:00** Lunch @ Wolfson Court

**14:00 to 14:50** Michael Moeller (Universität Siegen): *Sublabel-Accurate Relaxation of Nonconvex Energies*

In this talk I will present a convex relaxation technique for a particular class of energy functionals consisting of a pointwise nonconvex data term and a total variation regularization, as frequently used in image processing and computer vision problems. The method is based on the technique of functional lifting, in which the minimization problem is reformulated in a higher dimensional space in order to obtain a tighter approximation of the original nonconvex energy. INI 1

**14:50 to 15:40** Audrey Repetti (Heriot-Watt University): *Joint imaging and calibration using non-convex optimization*

Co-authors: Jasleen Birdi (Heriot-Watt University), Yves Wiaux (Heriot-Watt University)

New generations of imaging devices aim to produce high resolution and high dynamic range images.
In this context, the high dimensionality of the associated inverse problems can make them extremely challenging from an algorithmic point of view. In addition, the quality and accuracy of the reconstructed images often depend on the precision with which the imaging device has previously been calibrated. Unfortunately, calibration does not depend only on the device but may also rely on the time and on the direction of the acquisitions. This leads to the need to perform joint image reconstruction and calibration, and thus to solve non-convex blind deconvolution problems. We focus on the joint calibration and imaging problem in the context of radio-interferometric imaging in astronomy. In this case, the sparse images of interest can reach gigapixel or terapixel size, while the calibration variables consist of a large number of low resolution images related to each antenna of the telescope. To solve this problem, we leverage a block-coordinate forward-backward algorithm, specifically designed to minimize non-smooth, non-convex and high dimensional objective functions. We demonstrate by simulation the performance of this first joint imaging and calibration method in radio-astronomy. INI 1

**15:40 to 16:10** Afternoon Tea

**16:10 to 17:00** Christian Clason (Universität Duisburg-Essen): *Convex regularization of discrete-valued inverse problems*

We consider inverse problems where a distributed parameter is known a priori to take on values only from a given discrete set. This property can be promoted in Tikhonov regularization with the aid of a suitable convex but nondifferentiable regularization term. This allows applying standard approaches to show well-posedness and convergence rates in Bregman distance. Using the specific properties of the regularization term, it can be shown that convergence (albeit without rates) actually holds pointwise. Furthermore, the resulting Tikhonov functional can be minimized efficiently using a semi-smooth Newton method.
Numerical examples illustrate the properties of the regularization term and the numerical solution. This is joint work with Thi Bich Tram Do, Florian Kruse, and Karl Kunisch. INI 1
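The Rudin-Osher-Fatemi (ROF) model that appears as a building block in the first talk of this day has a compact 1D solver via projected gradient descent on its dual (a Chambolle-style scheme). The signal, parameters and helper names below are illustrative assumptions of this sketch, not code from any of the talks.

```python
import numpy as np

def rof_denoise_1d(f, lam, iters=500, tau=0.25):
    """Solve min_u 0.5*||u - f||^2 + lam*TV(u) in 1D via the dual problem:
    the dual variable p is clipped to [-lam, lam] and u = f - D^T p."""
    def Dt(p):  # adjoint of the forward difference np.diff
        return np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))
    p = np.zeros(len(f) - 1)
    for _ in range(iters):
        # Projected gradient ascent step; tau <= 0.25 <= 1/||D||^2 ensures convergence.
        p = np.clip(p + tau * np.diff(f - Dt(p)), -lam, lam)
    return f - Dt(p)

f = np.array([0.0, 0.2, -0.1, 0.1, 5.0, 5.2, 4.9, 5.1])  # noisy step signal
u = rof_denoise_1d(f, lam=0.2)
```

The solver flattens the small oscillations while keeping the jump, and it preserves the mean of the signal exactly (the divergence term D^T p sums to zero).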
**09:00 to 09:50** Mila Nikolova (CNRS; ENS de Cachan): *Alternating proximal gradient descent for nonconvex regularised problems with multiconvex coupling terms*

Co-author: Pauline Tan

There has been an increasing interest in constrained nonconvex regularized block multiconvex optimization problems. We introduce an approach that effectively exploits the multiconvex structure of the coupling term and enables complex application-dependent regularization terms to be used. The proposed Alternating Structure-Adapted Proximal gradient descent algorithm enjoys simple, well-defined updates. Global convergence of the algorithm to a critical point is proved using the so-called Kurdyka-Lojasiewicz property. What is more, we prove that a large class of useful objective functions obeying our assumptions are subanalytic and thus satisfy the Kurdyka-Lojasiewicz property. Finally, we present an application of the algorithm to big-data airborne sequences of images. INI 1

**09:50 to 10:40** Michael Unser (EPFL - Ecole Polytechnique Fédérale de Lausanne): *Representer theorems for ill-posed inverse problems: Tikhonov vs. generalized total-variation regularization*

In practice, ill-posed inverse problems are often dealt with by introducing a suitable regularization functional. The idea is to stabilize the problem while promoting "desirable" solutions. Here, we are interested in contrasting the effect of Tikhonov vs. total-variation-like regularization. To that end, we first consider a discrete setting and present two representer theorems that characterize the solution of general convex minimization problems subject to $\ell_2$ vs. $\ell_1$ regularization constraints. Next, we adopt a continuous-domain formulation where the regularization semi-norm is a generalized version of total variation tied to some differential operator L.
We prove that the extreme points of the corresponding minimization problem are nonuniform L-splines with fewer knots than the number of measurements. For instance, when L is the derivative operator, the solution is piecewise constant, which confirms a standard observation and explains why the solution is intrinsically sparse. The powerful aspect of this characterization is that it applies to any linear inverse problem. INI 1

**10:40 to 11:10** Morning Coffee

**11:10 to 12:00** Pierre Weiss (Université de Toulouse): *Estimation of linear operators from scattered impulse responses*

Co-authors: Paul Escande (Université de Toulouse), Jérémie Bigot (Université de Toulouse)

In this talk, I will propose a variational method to reconstruct operators with smooth kernels from scattered and noisy impulse responses. The proposed approach relies on the formalism of smoothing in reproducing kernel Hilbert spaces and on the choice of an appropriate regularization term that takes the smoothness of the operator into account. It is numerically tractable in very large dimensions and yields a representation that can be used for achieving fast matrix-vector products. We study the estimator's robustness to noise and analyze its approximation properties with respect to the size and the geometry of the dataset. It turns out to be minimax optimal. We finally show applications of the proposed algorithms to the reconstruction of spatially varying blur operators in microscopy imaging. INI 1

**12:00 to 12:50** Olga Veksler (University of Western Ontario): *Adaptive and Move Making Auxiliary Cuts for Binary Pairwise Energies*

Co-author: Lena Gorelick (University of Western Ontario)

Many computer vision problems require optimization of binary non-submodular energies. In this context, local iterative submodularization techniques based on trust region (LSA-TR) and auxiliary functions (LSA-AUX) have been recently proposed. They achieve state-of-the-art results on a number of computer vision applications.
We extend the LSA-AUX framework in two directions. First, unlike LSA-AUX, which selects auxiliary functions based solely on the current solution, we propose to incorporate several additional criteria. This results in tighter bounds for configurations that are more likely or closer to the current solution. Second, we propose move-making extensions of LSA-AUX which achieve tighter bounds by restricting the search space. Finally, we evaluate our methods on several applications. We show that for each application at least one of our extensions significantly outperforms the original LSA-AUX. Moreover, the best extension of LSA-AUX is comparable to or better than LSA-TR on four out of six applications. INI 1

**12:50 to 14:00** Lunch @ Wolfson Court

**14:00 to 14:50** Thomas Vogt (Universität zu Lübeck): *Optimal Transport-Based Total Variation for Functional Lifting and Q-Ball Imaging*

Co-author: Jan Lellmann (Institute of Mathematics and Image Computing, University of Lübeck)

One strategy in functional lifting is to consider probability measures on the label space of interest, which can be discrete or continuous. The considered functionals often make use of a total variation regularizer which, when lifted, allows for a dual formulation introducing a Lipschitz constraint. In our recent work, we proposed to use a similar formulation of total variation for the restoration of so-called Q-Ball images. In this talk, we present a mathematical framework for total variation regularization that is inspired by the theory of optimal transport and that covers all of the previous cases, including probability measures on discrete and continuous label spaces and on manifolds. This framework nicely explains the above-mentioned Lipschitz constraint and comes with a robust theoretical background. INI 1
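The Lipschitz constraint mentioned in the last abstract is the hallmark of Kantorovich-Rubinstein duality for the Wasserstein-1 distance. In one dimension, W1 between two histograms has a closed form as the integral of the absolute difference of their cumulative sums, which makes a compact sanity check (the grid and histograms are toy data of this sketch, not from the talk):

```python
import numpy as np

def w1_1d(p, q, dx=1.0):
    """Wasserstein-1 distance between two histograms on a uniform 1D grid,
    computed as the integral of |CDF_p - CDF_q| (Kantorovich-Rubinstein in 1D)."""
    return dx * np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # unit mass at grid point 0
q = np.array([0.0, 0.0, 0.0, 1.0, 0.0])  # unit mass at grid point 3
d = w1_1d(p, q)
# Moving one unit of mass a distance of 3 grid cells costs 3.0
```

Unlike a pointwise distance between the histograms, this metric grows with how far the mass must travel, which is exactly the behaviour one wants in a transport-based total variation.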
**14:50 to 15:40** Martin Holler (University of Graz): *Total Generalized Variation for Manifold-valued Data*

Co-authors: Kristian Bredies (University of Graz), Martin Storath (University of Heidelberg), Andreas Weinmann (Darmstadt University of Applied Sciences)

Introduced in 2010, the total generalized variation (TGV) functional is nowadays amongst the most successful regularization functionals for variational image reconstruction. It is defined for an arbitrary order of differentiation and provides a convex model for piecewise smooth vector-space data. On the other hand, variational models for manifold-valued data have become popular recently, and many approaches, such as first- and second-order TV regularization, have been successfully generalized to this setting. Despite the fact that TGV regularization is generally considered to be preferable to such approaches, an appropriate extension for manifold-valued data was still missing. In this talk we introduce the notion of second-order total generalized variation regularization for manifold-valued data. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present concrete instances that fulfill the proposed axioms. We prove well-posedness results and present algorithms for a numerical realization of these generalizations in the manifold setup. Further, we provide experimental results for synthetic and real data to underpin the proposed generalization numerically and show its potential for applications with manifold-valued data. INI 1

**15:40 to 16:10** Afternoon Tea

**16:10 to 17:00** Tammy Riklin Raviv (Ben-Gurion University): *Variational Methods for Image Segmentation*

In this talk I will present variational methods for image segmentation, with application to brain MRI tissue classification.
In particular, I will present an 'unconventional' use of the multinomial logistic regression function. This is joint work with Jacob Goldberger, Shiri Gordon and Boris Kodner. INI 1
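In its conventional use, the multinomial logistic regression function maps per-pixel features to per-class (e.g. tissue) probabilities via a softmax. A generic sketch trained by gradient descent on toy one-dimensional "intensity" features; the data, learning rate and iteration count are assumptions of this sketch, not the talk's actual model:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, shifted for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, lr=1.0, iters=3000):
    """Multinomial logistic regression fitted by plain gradient descent
    on the mean cross-entropy loss."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot labels
    for _ in range(iters):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)      # cross-entropy gradient
    return W

# Toy intensity feature (plus a bias column) for three separated classes.
X = np.array([[0.1, 1.0], [0.2, 1.0], [0.5, 1.0],
              [0.6, 1.0], [0.9, 1.0], [1.0, 1.0]])
y = np.array([0, 0, 1, 1, 2, 2])
W = fit_multinomial(X, y, n_classes=3)
pred = softmax(X @ W).argmax(axis=1)
```

The softmax outputs are valid class posteriors (each row sums to one), which is what makes the function attractive inside probabilistic tissue-classification models.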