
Timetable (ASCW03)

Approximation, sampling, and compression in high dimensional problems

Monday 17th June 2019 to Friday 21st June 2019

Monday 17th June 2019
09:20 to 09:40 Registration
09:40 to 09:50 Welcome from David Abrahams (Isaac Newton Institute)
09:50 to 10:40 Akram Aldroubi (Vanderbilt University)
Dynamical sampling and frames generated from powers of exponential operators
In this talk, I will give a brief review of the problem of frame generation from operator powers of exponentials acting on a set of vectors. I will discuss its relation to dynamical sampling, review some of the previous results and present several new ones.
INI 1
10:40 to 11:10 Morning Coffee
11:10 to 12:00 Denka Kutzarova (University of Illinois at Urbana-Champaign); (Bulgarian Academy of Sciences)
Transportation cost spaces on finite metric spaces
Transportation cost spaces are studied by several groups of researchers, for different reasons and under different names. The term Lipschitz-free spaces is commonly used in Banach space theory.
We prove that the transportation cost space on any finite metric space contains a large well-complemented subspace which is close to $\ell_1^n$.
We show that transportation cost spaces on large classes of recursively defined sequences of graphs are not uniformly isomorphic to $\ell_1^n$ of the corresponding dimensions. These classes contain well-known families of diamond graphs and Laakso graphs.
In the particular case of diamond graphs we prove that their cycle space is spanned by even levels of Haar functions. It is curious that the subspaces generated by all the even/odd levels of the Haar functions also appear in the study of quasi-greedy basic sequences in $L_1[0,1]$.
This research is joint with Stephen Dilworth and Mikhail Ostrovskii.
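For orientation (a standard definition, not part of the abstract): on a finite metric space $(M,d)$, the transportation cost norm of a finitely supported measure $\mu=\sum_i a_i\delta_{x_i}$ with $\sum_i a_i=0$ can be written, via Kantorovich-Rubinstein duality, as
$$ \|\mu\|_{TC} \;=\; \sup\Big\{\sum_i a_i f(x_i)\;:\; f:M\to\mathbb{R},\ \operatorname{Lip}(f)\le 1\Big\}, $$
which equals the least cost of transporting the positive part of $\mu$ onto its negative part when moving one unit of mass from $x$ to $y$ costs $d(x,y)$.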
INI 1
12:00 to 13:30 Lunch at Churchill College
13:30 to 14:20 Albert Cohen (Université Pierre et Marie Curie Paris)
Optimal sampling for approximation on general domains
We consider the approximation of an arbitrary function in any dimension from point samples. Approximants are picked from given or adaptively chosen finite dimensional spaces. Various recent works reveal that optimal approximations can be constructed at minimal sampling budget by least-squares methods with particular sampling measures. In this talk, we discuss strategies to construct these measures and their samples in the adaptive context and in general non-tensor-product multivariate domains.
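As a rough illustration of this kind of weighted least-squares strategy (a minimal sketch, not the speaker's code): samples are drawn from a measure biased by the inverse Christoffel function of the approximation space, and the least-squares problem is reweighted accordingly. The Legendre basis, the toy target function and all sizes below are illustrative assumptions.

    # Minimal sketch: optimal-measure weighted least squares on [-1, 1] with the
    # orthonormal Legendre basis.  Samples are drawn from the density k_m(x)/m
    # (w.r.t. dx/2), where k_m(x) = sum_j phi_j(x)^2, and the least-squares
    # problem is reweighted by m/k_m(x).
    import numpy as np
    from numpy.polynomial import legendre

    m = 10           # dimension of the polynomial space
    n = 4 * m        # number of samples (small oversampling)

    def onb(x, m):
        """Legendre polynomials, orthonormal w.r.t. the uniform measure dx/2 on [-1,1]."""
        V = np.stack([legendre.legval(x, [0] * j + [1]) for j in range(m)], axis=-1)
        return V * np.sqrt(2 * np.arange(m) + 1)

    def k_over_m(x):
        """Density of the optimal sampling measure with respect to dx/2."""
        return np.sum(onb(x, m) ** 2, axis=-1) / m

    def sample_optimal(n, rng):
        """Rejection sampling from (k_m/m) * dx/2, using the bound k_m(x)/m <= m."""
        out = []
        while len(out) < n:
            x = rng.uniform(-1, 1, size=n)
            u = rng.uniform(0, m, size=n)
            out.extend(x[u < k_over_m(x)])
        return np.array(out[:n])

    rng = np.random.default_rng(0)
    f = lambda t: np.exp(t) * np.cos(5 * t)     # toy target function
    x = sample_optimal(n, rng)
    sw = np.sqrt(1.0 / k_over_m(x))             # square roots of the weights m/k_m(x)
    A = sw[:, None] * onb(x, m)
    c, *_ = np.linalg.lstsq(A, sw * f(x), rcond=None)

    t = np.linspace(-1, 1, 1001)
    print("max grid error:", np.max(np.abs(onb(t, m) @ c - f(t))))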

INI 1
14:20 to 15:10 Peter Binev (University of South Carolina)
High Dimensional Approximation via Sparse Occupancy Trees
Adaptive domain decomposition is often used in finite element methods for solving partial differential equations in low space dimensions. The adaptive decisions are usually described by a tree. Assuming that we can find the (approximate) error of approximating a function on each element of the partition, we have shown that a particular coarse-to-fine method provides a near-best approximation. This result can be extended to approximating point clouds in any space dimension, provided that we have relevant information about the errors and can organize the data properly. Of course, this is subject to the curse of dimensionality and nothing can be done in the general case. In case the intrinsic dimensionality of the data is much smaller than the space dimension, one can define algorithms that defy the curse. This is usually done by assuming that the data domain is close to a low-dimensional manifold, first approximating this manifold and then the function defined on it. A few years ago, together with Philipp Lamby, Wolfgang Dahmen, and Ron DeVore, we proposed a direct method (without specifically identifying any low-dimensional set) that we called "sparse occupancy trees". The method defines a piecewise constant or linear approximation on general simplicial partitions. This talk considers an extension of this method to find a similar approximation on conforming simplicial partitions, following an idea from a recent result together with Francesca Fierro and Andreas Veeser about near-best approximation on conforming triangulations.
INI 1
15:10 to 15:40 Afternoon Tea
15:40 to 16:30 Claire Boyer (Sorbonne Université); (ENS - Paris)
Representer theorems and convex optimization
We establish a general principle which states that regularizing an inverse problem with a convex function yields solutions which are convex combinations of a small number of atoms. These atoms are identified with the extreme points and elements of the extreme rays of the regularizer level sets. As a side result, we characterize the minimizers of the total gradient variation. As an ongoing work, we will also study the geometry of the total gradient variation ball.
This is a joint work with Antonin Chambolle, Yohann De Castro, Vincent Duval, Frédéric de Gournay, and Pierre Weiss.
INI 1
16:30 to 17:30 Welcome Wine Reception at INI
Tuesday 18th June 2019
09:00 to 09:50 Simon Foucart (Texas A&M University)
Functions of Few Coordinate Variables: Sampling Schemes and Recovery Algorithms
I will revisit in this talk the task of approximating multivariate functions that depend on only a few of their variables. The number of samples required to achieve this task to a given accuracy was determined for Lipschitz functions several years ago. However, two questions of practical interest remain: can we provide an explicit sampling strategy, and can we efficiently produce approximants? I will (attempt to) answer these questions under some additional assumptions on the target function. Firstly, if it is known to be linear, then the problem is exactly the standard compressive sensing problem, and I will review some recent contributions there. Secondly, if the target function is quadratic, then the problem connects to sparse phaseless recovery and to jointly low-rank and bisparse recovery, for which some results and open questions will be presented. Finally, if the target function is known to increase coordinatewise, then the problem reduces to group testing, from which I will draw the sought-after sampling schemes and recovery algorithms.
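A minimal sketch of the linear case mentioned above: if $f(x)=\langle a,x\rangle$ with $a$ sparse, recovering $f$ from random point samples is a standard compressive sensing problem. The Gaussian sampling points, the use of orthogonal matching pursuit and all sizes are illustrative assumptions, not the speaker's algorithm.

    # Recover a sparse linear function f(x) = <a, x> from random point samples
    # via orthogonal matching pursuit (OMP).
    import numpy as np

    rng = np.random.default_rng(2)
    d, s, m = 200, 3, 60                       # ambient dimension, sparsity, samples

    a = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)
    a[support] = rng.standard_normal(s)        # unknown sparse coefficient vector

    X = rng.standard_normal((m, d))            # random sampling points
    y = X @ a                                  # observed values f(x_i) = <a, x_i>

    S, r = [], y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(X.T @ r)))    # coordinate most correlated with residual
        S.append(j)
        coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        r = y - X[:, S] @ coef

    a_hat = np.zeros(d)
    a_hat[S] = coef
    print("recovered support:", sorted(S), " true support:", sorted(support.tolist()))
    print("max coefficient error:", np.max(np.abs(a_hat - a)))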
INI 1
09:50 to 10:40 Alexander Olevskii (Tel Aviv University)
Discrete translates in function spaces
Given a Banach function space on R^n, does there exist a uniformly discrete set of translates of a single function which spans the space? I'll present a survey of the problem and discuss recent results, joint with A. Ulanovskii.
INI 1
10:40 to 11:10 Morning Coffee
11:10 to 12:00 Olga Mula (Université Paris-Dauphine)
Optimal algorithms for state estimation using reduced models
INI 1
12:00 to 13:30 Lunch at Churchill College
13:30 to 14:20 Joaquim Ortega-Cerdà (Universitat de Barcelona)
A sequence of well-conditioned polynomials

We find an explicit sequence of polynomials of arbitrary degree with small condition number. This solves a problem posed by Michael Shub and Stephen Smale in 1993.
This is joint work with Carlos Beltran, Ujué Etayo and Jordi Marzo.

INI 1
14:20 to 15:10 Holger Rauhut (RWTH Aachen University)
Linear and one-bit compressive sensing with subsampled random convolutions
Compressive sensing predicts that sparse vectors can be recovered from incomplete linear measurements by efficient algorithms in a stable way. While many theoretical results work with Gaussian random measurement matrices, practical applications usually demand structure. The talk covers the particular case of structured random measurements defined via convolution with a random vector and subsampling (deterministic or random). We will give an overview of the corresponding theory and will also cover recent results concerning recovery from one-bit measurements arising in quantized compressive sensing.
Based on joint works with Felix Krahmer, Shahar Mendelson, Sjoerd Dirksen and Hans-Christian Jung.
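A minimal sketch of the (linear, real-valued) measurement model in the talk's title, with an off-the-shelf recovery algorithm: the measurements are a random subsampling of the circular convolution of the unknown sparse vector with a Rademacher vector, and recovery is done by iterative hard thresholding. The parameters and the choice of algorithm are illustrative assumptions.

    # Sparse recovery from subsampled random convolutions via iterative hard
    # thresholding (IHT).
    import numpy as np

    rng = np.random.default_rng(1)
    N, m, s = 256, 80, 5                       # ambient dimension, measurements, sparsity

    eps = rng.choice([-1.0, 1.0], size=N)      # Rademacher generator of the circulant
    Omega = rng.choice(N, size=m, replace=False)   # randomly chosen rows (subsampling)

    def conv(x):
        """Circular convolution of x with eps, computed via the FFT."""
        return np.real(np.fft.ifft(np.fft.fft(eps) * np.fft.fft(x)))

    def A(x):                                  # subsampled convolution, normalized
        return conv(x)[Omega] / np.sqrt(m)

    def At(y):                                 # adjoint of A (zero-fill, then correlate)
        z = np.zeros(N)
        z[Omega] = y / np.sqrt(m)
        return np.real(np.fft.ifft(np.conj(np.fft.fft(eps)) * np.fft.fft(z)))

    # ground truth and measurements
    x0 = np.zeros(N)
    x0[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
    y = A(x0)

    # IHT: x <- H_s(x + A^T (y - A x)), unit step (A is nearly isometric on sparse vectors)
    x = np.zeros(N)
    for _ in range(200):
        x = x + At(y - A(x))
        x[np.argsort(np.abs(x))[:-s]] = 0.0    # keep only the s largest entries

    print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))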
INI 1
15:10 to 15:40 Afternoon Tea
15:40 to 16:30 Lukas Herrmann (ETH Zürich)
Quasi-Monte Carlo integration in uncertainty quantification of elliptic PDEs with log-Gaussian coefficients

Quasi-Monte Carlo (QMC) rules are suitable to overcome the curse of dimensionality in the numerical integration of high-dimensional integrands.
Moreover, their convergence rate of essentially first order is superior to that of Monte Carlo sampling.
We study a class of integrands that arise as solutions of elliptic PDEs with log-Gaussian coefficients.
In particular, we focus on the overall computational cost of the algorithm.
We prove that certain multilevel QMC rules achieve a consistent accuracy and a computational cost that is essentially of optimal order in terms of the degrees of freedom of the spatial Finite Element discretization, for a range of infinite-dimensional priors.
This is joint work with Christoph Schwab.

References:
[L. Herrmann, Ch. Schwab: QMC integration for lognormal-parametric, elliptic PDEs: local supports and product weights, Numer. Math. 141(1) pp. 63--102, 2019],
[L. Herrmann, Ch. Schwab: Multilevel quasi-Monte Carlo integration with product weights for elliptic PDEs with lognormal coefficients, to appear in ESAIM:M2AN],
[L. Herrmann: Strong convergence analysis of iterative solvers for random operator equations, SAM report, 2017-35, in review]
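For readers unfamiliar with QMC rules, here is a minimal sketch of a randomly shifted rank-1 lattice rule, the basic single-level ingredient (this is not the multilevel algorithm of the references; the Korobov-type generating vector and the toy integrand are assumptions, and in practice the generating vector is constructed by a component-by-component algorithm for the chosen weights).

    # Randomly shifted rank-1 lattice rule on [0,1)^d.
    import numpy as np

    def shifted_lattice_rule(f, z, N, n_shifts=8, rng=None):
        """Estimate the integral of f over [0,1)^d with N lattice points and random shifts."""
        if rng is None:
            rng = np.random.default_rng(0)
        d = len(z)
        i = np.arange(N)[:, None]
        estimates = []
        for _ in range(n_shifts):
            shift = rng.uniform(size=d)
            pts = np.mod(i * z[None, :] / N + shift, 1.0)   # N shifted lattice points
            estimates.append(np.mean(f(pts)))
        est = float(np.mean(estimates))
        err = float(np.std(estimates, ddof=1) / np.sqrt(n_shifts))  # std. error over shifts
        return est, err

    d, N = 8, 2**13
    a = 1571                                          # arbitrary Korobov multiplier (assumption)
    z = np.array([pow(a, j, N) for j in range(d)])    # Korobov-type generating vector
    f = lambda x: np.prod(1.0 + 0.1 * (x - 0.5), axis=1)   # toy integrand, exact integral = 1
    print(shifted_lattice_rule(f, z, N))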

INI 1
Wednesday 19th June 2019
09:00 to 09:50 Andreas Seeger (University of Wisconsin-Madison)
Basis properties of the Haar system in various function spaces

We present recent results on the Haar system in Besov and Triebel-Lizorkin spaces, with an emphasis on endpoint results.
Joint work with Gustavo Garrigós and Tino Ullrich.

INI 1
09:50 to 10:40 Christoph Thiele (University of Bonn)
On singular Brascamp Lieb integrals
We will give an overview of singular Brascamp-Lieb integrals and discuss some recent results based on recent arXiv postings with Polona Durcik.
INI 1
10:40 to 11:10 Morning Coffee
11:10 to 12:00 Hans Feichtinger (University of Vienna)
Approximation of continuous problems in Fourier Analysis by finite dimensional ones: The setting of the Banach Gelfand Triple
When it comes to the constructive realization of operators arising in Fourier Analysis, be it the Fourier transform itself, some convolution operator, or more generally an (underspread) pseudo-differential operator, it is natural to make use of sampled versions of the ingredients. The theory around the Banach Gelfand Triple (S0, L2, S0'), which is based on methods from Gabor and time-frequency analysis combined with the relevant function spaces (Wiener amalgams and modulation spaces), allows us to provide what we consider the appropriate setting, and possibly the starting point for qualitative as well as, later on, more quantitative error estimates.

INI 1
12:00 to 13:30 Lunch at Churchill College
13:30 to 18:00 Free afternoon
19:30 to 22:00 Formal Dinner at Christ's College


LOCATION
Christ's College
St Andrew's St, Cambridge CB2 3BU

DRESS CODE

Smart casual

MENU  
 
Starter
Camembert Fritters with Grape & Pecan Nut Salad  
Main course
Braised Lamb Shank with Vegetables
Open Ravioli with Squash & Porcini Mushrooms (V)  
Dessert
Cambridge Burnt Cream with Strawberries or Raspberries



Thursday 20th June 2019
09:00 to 09:50 Sinan Güntürk (Courant Institute of Mathematical Sciences); (New York University)
Extracting bits from analog samples: in pursuit of optimality
Sampling theorems, linear or non-linear, provide the basis for obtaining digital representations of analog signals, but it is often not obvious how to quantize these samples in order to achieve the best rate-distortion trade-off possible, especially in the presence of redundancy. We will present a general approach called "distributed beta encoding" which can achieve superior (and often near-optimal) rate-distortion performance in a wide variety of sampling scenarios. These will include some classical problems in the linear setting, such as Fourier and Gabor sampling, and some others in the nonlinear setting, such as compressive sampling, spectral super-resolution, and phase retrieval.
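As a point of reference (this is not the "distributed beta encoding" scheme of the talk), the greedy beta expansion of a single sample already illustrates how bits are extracted from an analog value, and why taking $\beta<2$ introduces the redundancy that such encoders exploit for robustness.

    # Greedy beta expansion of a sample y in [0, 1) and its reconstruction.
    def beta_encode(y, beta=1.8, n_bits=32):
        """Greedy beta expansion: y ~ sum_k b_k * beta**(-k)."""
        bits, r = [], y
        for _ in range(n_bits):
            r *= beta
            b = 1 if r >= 1.0 else 0      # greedy bit decision
            r -= b
            bits.append(b)
        return bits

    def beta_decode(bits, beta=1.8):
        return sum(b * beta ** (-(k + 1)) for k, b in enumerate(bits))

    y = 0.637
    bits = beta_encode(y)
    print(abs(y - beta_decode(bits)))     # reconstruction error decays like beta**(-n_bits)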
INI 1
09:50 to 10:40 Peter Richtarik (University of Edinburgh); (King Abdullah University of Science and Technology (KAUST)); (Moscow Institute of Physics and Technology)
Stochastic Quasi-Gradient Methods: Variance Reduction via Jacobian Sketching
We develop a new family of variance reduced stochastic gradient descent methods for minimizing the average of a very large number of smooth functions. Our method, JacSketch, is motivated by novel developments in randomized numerical linear algebra, and operates by maintaining a stochastic estimate of a Jacobian matrix composed of the gradients of individual functions. In each iteration, JacSketch efficiently updates the Jacobian matrix by first obtaining a random linear measurement of the true Jacobian through (cheap) sketching, and then projecting the previous estimate onto the solution space of a linear matrix equation whose solutions are consistent with the measurement. The Jacobian estimate is then used to compute a variance-reduced unbiased estimator of the gradient, followed by a stochastic gradient descent step. Our strategy is analogous to the way quasi-Newton methods maintain an estimate of the Hessian, and hence our method can be seen as a stochastic quasi-gradient method. Indeed, quasi-Newton methods project the current Hessian estimate onto a solution space of a linear equation consistent with a certain linear (but non-random) measurement of the true Hessian. Our method can also be seen as stochastic gradient descent applied to a controlled stochastic optimization reformulation of the original problem, where the control comes from the Jacobian estimates. We prove that for smooth and strongly convex functions, JacSketch converges linearly with a meaningful rate dictated by a single convergence theorem which applies to general sketches. We also provide a refined convergence theorem which applies to a smaller class of sketches, featuring a novel proof technique based on a stochastic Lyapunov function. This enables us to obtain sharper complexity results for variants of JacSketch with importance sampling. By specializing our general approach to specific sketching strategies, JacSketch reduces to the celebrated stochastic average gradient (SAGA) method.

Co-authors: Robert Mansel Gower (Telecom ParisTech), Francis Bach (INRIA - ENS - PSL Research University)
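Since the abstract notes that JacSketch reduces to SAGA for a particular sketch, here is a minimal SAGA sketch on a ridge-regularized least-squares problem (the data, step size and iteration count are illustrative assumptions, not the paper's experiments).

    # SAGA for min_x (1/n) sum_i f_i(x), f_i(x) = 0.5*(a_i^T x - b_i)^2 + 0.5*lam*||x||^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, lam = 500, 20, 0.1
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    def grad_i(x, i):
        """Gradient of the i-th summand f_i."""
        return (A[i] @ x - b[i]) * A[i] + lam * x

    x = np.zeros(d)
    table = np.array([grad_i(x, i) for i in range(n)])   # stored per-function gradients
    g_avg = table.mean(axis=0)
    step = 1.0 / (3.0 * (np.max(np.sum(A**2, axis=1)) + lam))   # ~ 1/(3 L_max)

    for _ in range(50 * n):
        i = rng.integers(n)
        g_new = grad_i(x, i)
        x = x - step * (g_new - table[i] + g_avg)        # variance-reduced gradient step
        g_avg = g_avg + (g_new - table[i]) / n           # keep the running average in sync
        table[i] = g_new

    # compare with the exact ridge-regression minimizer
    x_star = np.linalg.solve(A.T @ A / n + lam * np.eye(d), A.T @ b / n)
    print("distance to minimizer:", np.linalg.norm(x - x_star))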
INI 1
10:40 to 11:10 Morning Coffee
11:10 to 12:00 Boris Kashin (Steklov Mathematical Institute, Russian Academy of Sciences)
On some theorems on the restriction of an operator to a coordinate subspace
INI 1
12:00 to 13:30 Lunch at Churchill College
13:30 to 14:20 Yuan Xu (University of Oregon)
Orthogonal structure in and on quadratic surfaces
Spherical harmonics are orthogonal polynomials on the unit sphere. They are eigenfunctions of the Laplace-Beltrami operator on the sphere and they satisfy an addition formula (a closed formula for their reproducing kernel). In this talk, we consider orthogonal polynomials on quadratic surfaces of revolution and inside the domain bounded by quadratic surfaces. We will define orthogonal polynomials on the surface of a cone that possess both characteristics of spherical harmonics. In particular, the addition formula on the cone has a one-dimensional structure, which leads to a convolution structure on the cone useful for studying Fourier orthogonal series. Furthermore, the same narrative holds for orthogonal polynomials defined on the solid cones.
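For reference, the classical addition formula for spherical harmonics that the results on the cone parallel reads, in one common normalization ($d\ge 3$, $\{Y_{n,k}\}$ an orthonormal basis of degree-$n$ spherical harmonics with respect to the normalized surface measure):
$$ \sum_{k=1}^{\dim\mathcal{H}_n^d} Y_{n,k}(x)\,Y_{n,k}(y) \;=\; \frac{n+\lambda}{\lambda}\,C_n^{\lambda}(\langle x,y\rangle), \qquad x,y\in\mathbb{S}^{d-1},\ \lambda=\frac{d-2}{2}, $$
where $C_n^{\lambda}$ is the Gegenbauer polynomial of degree $n$.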
INI 1
14:20 to 15:10 Gitta Kutyniok (Technische Universität Berlin)
Beating the Curse of Dimensionality: A Theoretical Analysis of Deep Neural Networks and Parametric PDEs
High-dimensional parametric partial differential equations (PDEs) appear in various contexts including control and optimization problems, inverse problems, risk assessment, and uncertainty quantification. In most such scenarios the set of all admissible solutions associated with the parameter space is inherently low dimensional. This fact forms the foundation for the so-called reduced basis method.

Recently, numerical experiments demonstrated the remarkable efficiency of using deep neural networks to solve parametric problems. In this talk, we will present a theoretical justification for this class of approaches. More precisely, we will derive upper bounds on the complexity of ReLU neural networks approximating the solution maps of parametric PDEs. In fact, without any knowledge of its concrete shape, we use the inherent low-dimensionality of the solution manifold to obtain approximation rates which are significantly superior to those provided by classical approximation results. We use this low-dimensionality to guarantee the existence of a reduced basis. Then, for a large variety of parametric PDEs, we construct neural networks that yield approximations of the parametric maps not suffering from a curse of dimensionality and essentially only depending on the size of the reduced basis.

This is joint work with Philipp Petersen (Oxford), Mones Raslan, and Reinhold Schneider.
INI 1
15:10 to 16:10 Afternoon tea and poster session
Friday 21st June 2019
09:00 to 09:50 Aicke Hinrichs (Johannes Kepler Universität)
How good is random information compared to optimal information?
We study approximation and integration problems and compare the quality of optimal information with the quality of random information. For some problems random information is almost optimal and for other problems random information is much worse than optimal information. We give a survey about known and new results. Parts of the talk are based on joint work with D. Krieg, E. Novak, J. Prochno and M. Ullrich.
INI 1
09:50 to 10:40 Karlheinz Groechenig (University of Vienna)
Totally positive functions in sampling theory and time-frequency analysis
Totally positive functions play an important role in approximation theory and statistics. In this talk I will present recent new applications of totally positive functions (TPFs) in sampling theory and time-frequency analysis.
(i) We study the sampling problem for shift-invariant spaces generated by a TPF. These spaces arise as the span of the integer shifts of a TPF and are often used as a substitute for bandlimited functions. We give a complete characterization of sampling sets for a shift-invariant space with a TPF generator of Gaussian type in the style of Beurling.
(ii) A related problem is the question of Gabor frames, i.e., the spanning properties of time-frequency shifts of a given function. It is conjectured that the lattice shifts of a TPF generate a frame if and only if the density of the lattice exceeds 1. At this time this conjecture has been proved for two important subclasses of TPFs. For rational lattices it is true for arbitrary TPFs. So far, TPFs seem to be the only window functions for which the fine structure of the associated Gabor frames is tractable.
(iii) Yet another question in time-frequency analysis is the existence of zeros of the Wigner distribution (or the radar ambiguity function). So far all examples of zero-free ambiguity functions are related to TPFs, e.g., the ambiguity function of the Gaussian is zero free.
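For readers outside time-frequency analysis, the frame property in (ii) means that there exist constants $0<A\le B<\infty$ with
$$ A\|f\|_2^2 \;\le\; \sum_{\lambda\in\Lambda}|\langle f,\pi(\lambda)g\rangle|^2 \;\le\; B\|f\|_2^2 \qquad \text{for all } f\in L^2(\mathbb{R}), $$
where $g$ is the window (here a TPF), $\pi(x,\omega)g(t)=e^{2\pi i\omega t}g(t-x)$ is a time-frequency shift, and $\Lambda$ is the lattice; for a rectangular lattice $\Lambda=\alpha\mathbb{Z}\times\beta\mathbb{Z}$ the density condition in the conjecture reads $\alpha\beta<1$.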
INI 1
10:40 to 11:10 Morning Coffee
11:10 to 12:00 Feng Dai (University of Alberta)
Integral norm discretization and related problems
In this talk, we will discuss the problem of replacing an integral norm with respect to a given probability measure by the corresponding integral norm with respect to a discrete measure. We study the problem for elements of finite-dimensional spaces in a general setting, paying special attention to the case of multivariate trigonometric polynomials with frequencies from a finite set with fixed cardinality. Both new results and a survey of known results will be presented.

This is a joint work with A. Prymak, V.N. Temlyakov and S. Tikhonov.
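In one standard formulation (stated here for orientation, $1\le q<\infty$): given an $N$-dimensional subspace $X_N\subset L_q(\Omega,\mu)$, one seeks points $\xi^1,\dots,\xi^m$, with $m$ as close to $N$ as possible, such that
$$ C_1\|f\|_q^q \;\le\; \frac{1}{m}\sum_{j=1}^{m}|f(\xi^j)|^q \;\le\; C_2\|f\|_q^q \qquad \text{for all } f\in X_N, $$
with constants $C_1,C_2>0$ independent of $N$.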
INI 1
12:00 to 13:30 Lunch at Churchill College
13:30 to 14:20 Milana Gataric (University of Cambridge)
Imaging through optical fibres
In this talk, I'll present some recent results on reconstruction of optical vector-fields using measurements acquired by an optical fibre characterised by a non-unitary integral transform with an unknown spatially-variant kernel. A new imaging framework will be introduced, which through regularisation is able to recover an optical vector-field with respect to an arbitrary representation system potentially different from the one used for fibre calibration. In particular, this enables the recovery of an optical vector-field with respect to a Fourier basis, which is shown to yield indicative features of increased scattering associated with tissue abnormalities. The effectiveness of this framework is demonstrated using biological tissue samples in an experimental setting where measurements are acquired by a fibre endoscope, and it is observed that indeed the recovered Fourier coefficients are useful in distinguishing healthy tissues from lesions in early stages of cancer. If time permits, I'll also briefly present a new method that enables recovery of non-unitary fibre transmission matrices necessary for minimally invasive optical imaging in inaccessible areas of the body.
INI 1
14:20 to 15:10 Geno Nikolov (Sofia University St. Kliment Ohridski)
Markov-type inequalities and extreme zeros of orthogonal polynomials
The talk is centered around the problem of finding (or obtaining tight two-sided bounds for) the sharp constants in certain Markov-Bernstein type inequalities in weighted $L_2$ norms. It turns out that, under certain assumptions, this problem is equivalent to the estimation of the extreme zeros of orthogonal polynomials with respect to a measure supported on $R_{+}$. It will be shown how classical tools like the Euler-Rayleigh method and the Gershgorin circle theorem produce surprisingly good bounds for the extreme zeros of the Jacobi, Gegenbauer and Laguerre polynomials. The sharp constants in the $L_2$ Markov inequalities with the Laguerre and Gegenbauer weight functions and in a discrete $\ell_2$ Markov-Bernstein inequality are investigated using the same tool.
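The Euler-Rayleigh technique mentioned above can be summarized as follows (a minimal statement, not specific to the talk): if a polynomial $q$ has only positive zeros $x_1,\dots,x_n$ and $s_k=\sum_j x_j^{-k}$ denotes the power sums of the reciprocal zeros (computable from the coefficients of $q$ via Newton's identities), then
$$ s_k^{-1/k} \;\le\; \min_j x_j \;\le\; \frac{s_k}{s_{k+1}}, \qquad k=1,2,\dots, $$
and both bounds improve monotonically and converge to the smallest zero as $k\to\infty$.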
INI 1