Timetable (ASCW01)

Challenges in optimal recovery and hyperbolic cross approximation

Monday 18th February 2019 to Friday 22nd February 2019

Monday 18th February 2019
09:00 to 09:25 Registration
09:25 to 09:35 Welcome from Christie Marr (INI Deputy Director)
09:40 to 10:15 Henryk Wozniakowski (Columbia University; University of Warsaw)
Exponential tractability of weighted tensor product problems
INI 1
10:20 to 11:00 Morning Coffee
11:00 to 11:35 Aicke Hinrichs (Johannes Kepler Universität)
Random sections of ellipsoids and the power of random information
We study the circumradius of the intersection of an $m$-dimensional ellipsoid~$\mathcal E$ with half axes $\sigma_1\geq\dots\geq \sigma_m$ with random subspaces of codimension $n$. We find that, under certain assumptions on $\sigma$, this random radius $\mathcal{R}_n=\mathcal{R}_n(\sigma)$ is of the same order as the minimal such radius $\sigma_{n+1}$ with high probability. In other situations $\mathcal{R}_n$ is close to the maximum~$\sigma_1$. The random variable $\mathcal{R}_n$ naturally corresponds to the worst-case error of the best algorithm based on random information for $L_2$-approximation of functions from a compactly embedded Hilbert space $H$ with unit ball $\mathcal E$.

In particular, $\sigma_k$ is the $k$th largest singular value of the embedding $H\hookrightarrow L_2$. In this formulation, one can also consider the case $m=\infty$, and we prove that random information behaves very differently depending on whether $\sigma \in \ell_2$ or not. For $\sigma \notin \ell_2$ random information is completely useless. For $\sigma \in \ell_2$ the expected radius of random information tends to zero at least at rate $o(1/\sqrt{n})$ as $n\to\infty$.

In the proofs we use a comparison result for Gaussian processes a la Gordon, exponential estimates for sums of chi-squared random variables, and estimates for the extreme singular values of (structured) Gaussian random matrices.

This is joint work with David Krieg, Erich Novak, Joscha Prochno and Mario Ullrich.
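A minimal numerical sketch (not code from the talk) of the quantity $\mathcal{R}_n$: for a diagonal ellipsoid with the assumed decay $\sigma_k = 1/k$ and a uniformly random subspace of codimension $n$, the circumradius of the intersection can be computed exactly from the smallest eigenvalue of a projected matrix and compared with the minimal radius $\sigma_{n+1}$.

```python
# Minimal Monte Carlo sketch (illustration only, not code from the talk):
# for an ellipsoid with half axes sigma_1 >= ... >= sigma_m, the circumradius of
# its intersection with a subspace spanned by the orthonormal columns of U equals
# 1 / sqrt(lambda_min(U^T D^{-2} U)), where D = diag(sigma).
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 200, 20, 50
sigma = 1.0 / np.arange(1, m + 1)          # assumed square-summable decay sigma_k = 1/k

radii = []
for _ in range(trials):
    G = rng.standard_normal((m, m - n))
    U, _ = np.linalg.qr(G)                  # orthonormal basis of a random (m-n)-dim subspace
    M = U.T @ np.diag(sigma ** -2) @ U
    lam_min = np.linalg.eigvalsh(M)[0]      # eigvalsh returns eigenvalues in ascending order
    radii.append(1.0 / np.sqrt(lam_min))    # circumradius of ellipsoid ∩ subspace

print("minimal radius sigma_{n+1}     :", sigma[n])
print("mean radius of random info     :", np.mean(radii))
```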
INI 1
11:40 to 12:15 Yuri Malykhin (Steklov Mathematical Institute, Russian Academy of Sciences; Moscow State University)
On some lower bounds for Kolmogorov widths
INI 1
12:20 to 13:40 Lunch at Churchill College
13:40 to 14:15 Jan Vybiral (Charles University, Prague)
Approximation of Ridge Functions and Sparse Additive Models
The approximation of smooth multivariate functions is known to suffer from the curse of dimensionality. We discuss the approximation of structured multivariate functions, which take the form of a ridge, a sum of ridges, or of so-called sparse additive models. We also give results on the optimality of such algorithms.
INI 1
14:20 to 14:55 Alexander Litvak (University of Alberta)
Order statistics and Mallat--Zeitouni problem
Let $X$ be an $n$-dimensional random centered Gaussian vector with independent but not necessarily identically distributed coordinates and let $T$ be an orthogonal transformation of $\mathbb{R}^n$. We show that the random vector $Y=T(X)$ satisfies $$\mathbb{E} \sum \limits_{j=1}^k j\mbox{-}\min _{i\leq n}{X_{i}}^2 \leq C \mathbb{E} \sum\limits_{j=1}^k j\mbox{-}\min _{i\leq n}{Y_{i}}^2$$ for all $k\leq n$, where ``$j\mbox{-}\min$'' denotes the $j$-th smallest component of the corresponding vector and $C>0$ is a universal constant. This resolves (up to a multiplicative constant) an old question of S. Mallat and O. Zeitouni regarding optimality of the Karhunen--Lo\`eve basis for the nonlinear reconstruction. We also show some relations for order statistics of random vectors (not only Gaussian), which are of independent interest. This is joint work with Konstantin Tikhomirov.
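For intuition, a small Monte Carlo experiment (an illustration only, with arbitrarily chosen dimensions and variances, not the authors' code) compares the expected sum of the $k$ smallest squared coordinates of $X$ and of $Y=TX$ for one fixed random orthogonal $T$:

```python
# Small Monte Carlo illustration (not the authors' code): compare
# E[sum of the k smallest squared coordinates] for a centered Gaussian X with
# unequal variances and for Y = T X with T a fixed random orthogonal matrix.
import numpy as np

rng = np.random.default_rng(1)
n, k, trials = 64, 8, 20000
stds = np.linspace(0.1, 3.0, n)                    # non-identical coordinate standard deviations

T, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a fixed random orthogonal transformation

X = rng.standard_normal((trials, n)) * stds
Y = X @ T.T                                        # rows of Y are T applied to rows of X

def expected_small_sum(Z, k):
    """Average of the sum of the k smallest squared coordinates."""
    return np.sort(Z ** 2, axis=1)[:, :k].sum(axis=1).mean()

lhs = expected_small_sum(X, k)
rhs = expected_small_sum(Y, k)
print(f"E sum of {k} smallest X_i^2 = {lhs:.4f}")
print(f"E sum of {k} smallest Y_i^2 = {rhs:.4f}   (theorem: lhs <= C * rhs)")
```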


INI 1
15:00 to 15:30 Afternoon Tea
15:30 to 16:05 Mario Ullrich (Johannes Kepler Universität)
Construction of high-dimensional point sets with small dispersion
Based on deep results from coding theory, we present a deterministic algorithm that constructs a point set in dimension $d$ with dispersion at most $\varepsilon$ and size $\mathrm{poly}(1/\varepsilon)\cdot\log(d)$, which is optimal with respect to the dependence on $d$. The running time of the algorithm is, although super-exponential in $1/\varepsilon$, only polynomial in $d$.
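To make the quantity concrete (this is unrelated to the coding-theoretic construction of the talk), the dispersion of a small two-dimensional point set, i.e. the volume of the largest empty axis-parallel box in $[0,1]^2$, can be computed by brute force:

```python
# Brute-force computation of the dispersion of a 2-D point set (illustration of the
# quantity only; this is not the construction of the talk). A maximal empty open box
# has each edge either on the boundary of [0,1]^2 or touching a point, so it suffices
# to enumerate boxes with edges at point coordinates or at 0 and 1.
import itertools
import numpy as np

def dispersion_2d(points):
    xs = sorted(set(np.concatenate(([0.0, 1.0], points[:, 0]))))
    ys = sorted(set(np.concatenate(([0.0, 1.0], points[:, 1]))))
    best = 0.0
    for x0, x1 in itertools.combinations(xs, 2):
        for y0, y1 in itertools.combinations(ys, 2):
            inside = ((points[:, 0] > x0) & (points[:, 0] < x1) &
                      (points[:, 1] > y0) & (points[:, 1] < y1))
            if not inside.any():                       # the open box contains no point
                best = max(best, (x1 - x0) * (y1 - y0))
    return best

rng = np.random.default_rng(7)
pts = rng.random((32, 2))
print("dispersion of 32 random points:", dispersion_2d(pts))
```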
INI 1
16:10 to 16:45 Discussion INI 1
17:00 to 18:00 Welcome Wine Reception at INI
Tuesday 19th February 2019
09:00 to 09:35 Winfried Sickel (Friedrich-Schiller-Universität Jena)
The Haar System and Smoothness Spaces built on Morrey Spaces
For some Nikol'skij-Besov spaces $B^s_{p,q}$ the orthonormal Haar system can be used as an unconditional Schauder basis. Nowadays, necessary and sufficient conditions on $p,q$ and $s$ are known for this property. In recent years a number of papers have investigated modifications of Nikol'skij-Besov spaces based on Morrey spaces. In my talk I will concentrate on a version called Besov-type spaces, denoted by $B^{s,\tau}_{p,q}$. It will be my aim to discuss some necessary and some sufficient conditions on the parameters $p,q,s,\tau$ such that one can characterize these classes by means of the Haar system. This is joint work with Dachun Yang and Wen Yuan (Beijing Normal University).
INI 1
09:40 to 10:15 Dachun Yang (Beijing Normal University)
Ball Average Characterizations of Function Spaces

It is well known that function spaces play an important role in the study of various problems in analysis. In this talk, we present pointwise and ball average characterizations of function spaces including Sobolev, Besov and Triebel-Lizorkin spaces on Euclidean spaces. These characterizations have the advantage that they can be used as definitions of these function spaces on metric measure spaces. Some open questions are also presented.

INI 1
10:20 to 11:00 Morning Coffee
11:00 to 11:35 Wen Yuan (Beijing Normal University)
Embedding and continuity envelopes of Besov-type spaces

In this talk, we discuss sharp embedding properties between Besov-type and Triebel-Lizorkin-type spaces and present some related necessary and sufficient conditions for these embeddings. The corresponding continuity envelopes are also worked out.

INI 1
11:40 to 12:15 Bin Han (University of Alberta)
Directional Framelets with Low Redundancy and Directional Quasi-tight Framelets

Edge singularities are ubiquitous and hold key information for many high-dimensional problems. Consequently, directional representation systems are required to effectively capture edge singularities in high-dimensional problems. However, increased angular resolution often significantly increases the redundancy rate of a directional system. High redundancy rates lead to expensive computational costs and large storage requirements, which hinder the usefulness of such directional systems for problems in moderately high dimensions such as video processing. In this talk, we attack this problem by using directional tensor product complex tight framelets with mixed sampling factors. The resulting directional system has good directionality with a very low redundancy rate $\frac{3^d-1}{2^d-1}$; e.g., the redundancy rates are $2$, $2\frac{2}{3}$, $3\frac{5}{7}$, $5\frac{1}{3}$ and $7\frac{25}{31}$ for dimensions $d=1,\ldots,5$. Our numerical experiments on image/video denoising and inpainting show that the performance of our proposed directional system with low redundancy rate is comparable to or better than several state-of-the-art methods which have much higher redundancy rates. In the second part, we shall discuss our recent developments of directional quasi-tight framelets in high dimensions. This is a joint work with Chenzhe Diao, Zhenpeng Zhao and Xiaosheng Zhuang.
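The quoted redundancy rates are just the values of $\frac{3^d-1}{2^d-1}$; a quick arithmetic check:

```python
# Quick arithmetic check of the redundancy rates (3^d - 1) / (2^d - 1) quoted above.
from fractions import Fraction

for d in range(1, 6):
    r = Fraction(3 ** d - 1, 2 ** d - 1)
    print(f"d = {d}: redundancy = {r} = {float(r):.4f}")
```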

INI 1
12:20 to 13:40 Lunch at Churchill College
13:40 to 14:15 Clayton Webster (University of Tennessee; Oak Ridge National Laboratory)
Polynomial approximation via compressed sensing of high-dimensional functions on lower sets
This talk will focus on compressed sensing approaches to sparse polynomial approximation of complex functions in high dimensions. Of particular interest is the parameterized PDE setting, where the target function is smooth, characterized by a rapidly decaying orthonormal expansion, whose most important terms are captured by a lower (or downward closed) set. By exploiting this fact, we will present and analyze several procedures for exactly reconstructing a set of (jointly) sparse vectors, from incomplete measurements.  These include novel weighted $\ell_1$ minimization, improved iterative hard thresholding, mixed convex relaxations, as well as nonconvex penalties. Theoretical recovery guarantees will also be presented based on improved bounds for the restricted isometry property, as well as unified null space properties that encompass all currently proposed nonconvex minimizations.  Numerical examples are provided to support the theoretical results and demonstrate the computational efficiency of the described compressed sensing methods. 
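As a reference point for one of the methods mentioned above, here is a textbook (unweighted) iterative hard thresholding sketch for recovering a sparse vector from Gaussian measurements; the improved and weighted variants analyzed in the talk are not reproduced here, and the problem sizes are arbitrary.

```python
# Textbook iterative hard thresholding (IHT) sketch for sparse recovery from
# incomplete Gaussian measurements; the improved/weighted variants discussed in
# the talk are not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
N, m, s = 256, 128, 6                          # ambient dimension, measurements, sparsity

x_true = np.zeros(N)
support = rng.choice(N, s, replace=False)
x_true[support] = rng.standard_normal(s)

A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian sensing matrix, roughly unit-norm columns
y = A @ x_true

def hard_threshold(z, s):
    """Keep the s largest-magnitude entries of z and zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    out[idx] = z[idx]
    return out

x = np.zeros(N)
for _ in range(200):
    x = hard_threshold(x + A.T @ (y - A @ x), s)   # gradient step + projection onto s-sparse vectors

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```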
INI 1
14:20 to 14:55 Oscar Dominguez (Universidad Complutense de Madrid)
Characterizations of Besov spaces in terms of K-functionals
Besov spaces occur naturally in many fields of analysis. In this talk, we discuss various characterizations of Besov spaces in terms of different K-functionals. For instance, we present descriptions via oscillations, Bianchini-type norms and approximation methods. This is a joint work with S. Tikhonov (Barcelona).
INI 1
15:00 to 15:30 Afternoon Tea
15:30 to 16:05 Wenrui Ye (The University of International Business and Economics - Beijing)
Local restriction theorem and maximal Bochner-Riesz operator for the Dunkl transforms

In this talk, I will mainly describe joint work with Dr. Feng Dai on the critical index for the almost everywhere convergence of the Bochner-Riesz means in weighted $L^p$-spaces with $p=1$ or $p>2$. Our results in the case $p>2$ are in full analogy with the classical result of M. Christ on estimates of the maximal Bochner-Riesz means of Fourier integrals and the classical result of A. Carbery, José L. Rubio de Francia and L. Vega on a.e. convergence of Fourier integrals. In addition, I will introduce several new results related to our main results, including: (i) a local restriction theorem for the Dunkl transform which is significantly stronger than the global one, but more difficult to prove; (ii) a weighted Littlewood-Paley inequality with $A_p$-weights in the Dunkl non-commutative setting; (iii) sharp local pointwise estimates of several important kernel functions.

INI 1
16:10 to 16:45 Discussion INI 1
Wednesday 20th February 2019
09:00 to 09:35 Holger Rauhut (RWTH Aachen University)
Recovery of functions of many variables via compressive sensing
The talk will report on the use of compressive sensing for the recovery of functions of many variables from sample values. We will cover trigonometric expansions as well as expansions in tensorized orthogonal polynomial systems and provide convergence rates in terms of the number of samples which avoid the curse of dimensionality. The technique can be used for the numerical solution of parametric operator equations.
INI 1
09:40 to 10:15 Dũng Dinh (Vietnam National University)
Dimension-dependence error estimates for sampling recovery on Smolyak grids

We investigate dimension-dependent estimates of the approximation error for linear algorithms of sampling recovery on Smolyak grids, parametrized by $m$, for periodic $d$-variate functions from the space with Lipschitz-H\"older mixed smoothness $\alpha > 0$. For the subsets of the unit ball in this space consisting of functions with homogeneous condition and of functions depending on $\nu$ active variables ($1 \le \nu \le d$), respectively, we prove upper bounds and lower bounds (for $\alpha \le 2$) on the error of optimal sampling recovery on Smolyak grids, explicit in $d$, $\nu$, $m$ when $d$ and $m$ may be large. This is a joint work with Mai Xuan Thao, Hong Duc University, Thanh Hoa, Vietnam.

INI 1
10:20 to 11:00 Morning Coffee
11:00 to 11:35 Martin Buhmann (Justus-Liebig-Universität Gießen)
Recent Results on Rational Approximation and Interpolation with Completely and Multiply Monotone Radial Basis Functions
We will report on new results about approximations to continuous functions of multiple variables. We shall use either approximation with interpolation or approximation by rational functions. For these kinds of approximations, radial basis functions are particularly attractive, as they provide regular, positive definite or conditionally positive definite approximations, independent of the spatial dimension and independent of the distribution of the data points we wish to work with. These interpolants have very many applications, for example in solving nonlinear partial differential equations by collocation. In this talk, we classify radial basis and other functions that are useful for such scattered data interpolation or for rational approximations from vector spaces spanned by translates of those basis functions (kernels); for this we study in particular multiply and/or completely monotone functions. We collect special properties of such monotone functions, generalise them and find larger classes than the well-known monotone functions for multivariate interpolation. Furthermore, we discuss efficient ways to compute rational approximations using the same type of kernels.
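A minimal scattered-data interpolation sketch in this spirit (an illustration, not code from the talk), using the inverse multiquadric $\varphi(r)=(1+r^2)^{-1/2}$ as an example kernel: its profile is completely monotone as a function of $r^2$, so the interpolation matrix is positive definite in every dimension.

```python
# Minimal scattered-data RBF interpolation sketch (illustration only): the inverse
# multiquadric phi(r) = (1 + r^2)^(-1/2) has a completely monotone profile in r^2,
# so the interpolation matrix below is symmetric positive definite.
import numpy as np

rng = np.random.default_rng(3)

def phi(r):
    return 1.0 / np.sqrt(1.0 + r ** 2)           # inverse multiquadric kernel

def f(X):                                        # an arbitrary test function to interpolate
    return np.sin(2 * np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

X = rng.random((80, 2))                          # scattered centers in [0,1]^2
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
coeff = np.linalg.solve(phi(dist), f(X))         # interpolation coefficients

# evaluate the interpolant on fresh test points
Xt = rng.random((500, 2))
dt = np.linalg.norm(Xt[:, None, :] - X[None, :, :], axis=-1)
err = np.max(np.abs(phi(dt) @ coeff - f(Xt)))
print("max error on test points:", err)
```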
INI 1
11:40 to 12:15 Lutz Kaemmerer (Technische Universität Chemnitz)
Multiple Rank-1 Lattices as Sampling Schemes for Approximation
The approximation of functions using sampling values along single rank-1 lattices leads to convergence rates of the approximation errors that are far from optimal in spaces of dominating mixed smoothness. A recently published idea uses sampling values along several rank-1 lattices in order to reconstruct multivariate trigonometric polynomials, accompanied by fast methods for the construction of these sampling schemes as well as available fast Fourier transform algorithms. This motivates investigations of the approximation properties of the arising sampling operators applied to functions of specific smoothness, in particular functions of dominating mixed smoothness, which naturally leads to hyperbolic cross approximations.
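To make the single-lattice setting concrete, the sketch below (an illustration only, not the multiple rank-1 lattice construction of the talk; the lattice size $n=257$ and generating vector $z=(1,33)$ are arbitrary choices whose suitability is checked in the code) samples a trigonometric polynomial with hyperbolic-cross frequencies along one rank-1 lattice and recovers its coefficients with a single one-dimensional FFT.

```python
# Minimal rank-1 lattice sketch (illustration only, not the multiple-lattice
# construction of the talk): sample a trigonometric polynomial along a single
# rank-1 lattice and recover its Fourier coefficients with one length-n FFT.
import numpy as np

d, n = 2, 257                                    # dimension and (prime) lattice size
z = np.array([1, 33])                            # generating vector (assumed suitable, checked below)
I = [(k1, k2) for k1 in range(-4, 5) for k2 in range(-4, 5)
     if max(1, abs(k1)) * max(1, abs(k2)) <= 4]  # a small hyperbolic-cross frequency set

# reconstruction condition: k -> k.z mod n must be injective on I
addr = [(k1 * z[0] + k2 * z[1]) % n for k1, k2 in I]
assert len(set(addr)) == len(I)

rng = np.random.default_rng(4)
coef = rng.standard_normal(len(I)) + 1j * rng.standard_normal(len(I))

j = np.arange(n)
nodes = (j[:, None] * z[None, :] / n) % 1.0      # the n lattice nodes in [0,1)^d
samples = sum(c * np.exp(2j * np.pi * (k1 * nodes[:, 0] + k2 * nodes[:, 1]))
              for c, (k1, k2) in zip(coef, I))

fhat = np.fft.fft(samples) / n                   # one 1-D FFT of length n
recovered = np.array([fhat[a] for a in addr])    # read coefficients off their lattice addresses
print("max coefficient error:", np.max(np.abs(recovered - coef)))
```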
INI 1
12:20 to 13:40 Lunch at Churchill College
13:40 to 18:00 Free afternoon
19:30 to 22:00 Formal Dinner at Emmanuel College (Old Library)


LOCATION
Emmanuel College

MENU
   

Starters

Butternut Squash Veloute with Pumpkin Seed Oil (V)  
Main course
Supreme of Chicken, Parsley Risotto, Crispy Chorizo  
Roasted Artichoke, Parsley Risotto, Crispy Kale (V)  
Selection of vegetables  
Dessert
Crème Brûlée
Coffee and Mints

DRESS CODE
Smart casual
Thursday 21st February 2019
09:00 to 09:35 Thomas Kuehn (Universität Leipzig)
Preasymptotic estimates for approximation of multivariate periodic Sobolev functions
Approximation of Sobolev functions is a topic with a long history and many applications in different branches of mathematics. The asymptotic order as $n\to\infty$ of the approximation numbers $a_n$ is well known for embeddings of isotropic Sobolev spaces and also for Sobolev spaces of dominating mixed smoothness. However, if the dimension $d$ of the underlying domain is very high, one has to wait exponentially long until the asymptotic rate becomes visible. Hence, for computational issues this rate is useless; what really matters is the preasymptotic range, say $n\le 2^d$.
In the talk I will first give a short overview over this relatively new field. Then I will present some new preasymptotic estimates for $L_2$-approximation of periodic Sobolev functions, which improve the previously known results. I will discuss the cases of isotropic and dominating mixed smoothness, and also $C^\infty$-functions of Gevrey type. Clearly, on all these spaces there are many equivalent norms. It is an interesting effect that - in contrast to the asymptotic rates - the preasymptotic behaviour strongly depends on the chosen norm.
INI 1
09:40 to 10:15 Konstantin Ryutin
Best m-term approximation of the "step-function" and related problems
The main point of the talk is the problem of approximation of the step-function by $m$-term trigonometric polynomials and some closely related problems: the approximate rank of a specific triangular matrix, the Kolmogorov width of BV functions. This problem has its origins in approximation theory (best sparse approximation and Kolmogorov widths) as well as in computer science (approximate rank of a matrix). There are different approaches and techniques: the $\gamma_2$-norm, random approximations, orthomassivity of a set... I plan to show what can be achieved by these techniques.
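For orientation, the following small experiment (not from the talk; grid size and $m$ chosen arbitrarily) computes a best $m$-term trigonometric approximation of a periodic step function by keeping the $m$ largest-magnitude discrete Fourier coefficients on a fine grid:

```python
# Simple numerical illustration (not from the talk): best m-term trigonometric
# approximation of a periodic step function, obtained by keeping the m
# largest-magnitude discrete Fourier coefficients on a fine grid.
import numpy as np

N, m = 4096, 25
t = np.arange(N) / N
f = np.where(t < 0.5, 1.0, -1.0)             # periodic step function

c = np.fft.fft(f) / N                        # discrete Fourier coefficients
keep = np.argsort(np.abs(c))[-m:]            # indices of the m largest coefficients
c_m = np.zeros_like(c)
c_m[keep] = c[keep]
f_m = np.fft.ifft(c_m).real * N              # the m-term approximant on the grid

print("relative L2 error of the best m-term approximation:",
      np.linalg.norm(f - f_m) / np.linalg.norm(f))
```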
INI 1
10:20 to 11:00 Morning Coffee
11:00 to 11:35 Michael Gnewuch (Christian-Albrechts-Universität zu Kiel)
Explicit error bounds for randomized Smolyak algorithms and an application to infinite-dimensional integration
Smolyak's method, also known as hyperbolic cross approximation or sparse grid method, is a powerful tool to tackle multivariate tensor product problems just with the help of efficient algorithms for the corresponding univariate problem. We provide upper and lower error bounds for randomized Smolyak algorithms with fully explicit dependence on the number of variables and the number of information evaluations used. The error criteria we consider are the worst-case root mean square error (the typical error criterion for randomized algorithms, often referred to as ``randomized error'') and the root mean square worst-case error (often referred to as ``worst-case error''). Randomized Smolyak algorithms can be used as building blocks for efficient methods, such as multilevel algorithms, multivariate decomposition methods or dimension-wise quadrature methods, to tackle successfully high-dimensional or even infinite-dimensional problems. As an example, we provide a very general and sharp result on infinite-dimensional integration on weighted reproducing kernel Hilbert spaces and illustrate it for the special case of weighted Korobov spaces. We explain how this result can be extended, e.g., to spaces of functions whose smooth dependence on successive variables increases (``spaces of increasing smoothness'') and to the problem of $L_2$-approximation (function recovery).
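The deterministic skeleton of Smolyak's construction can be sketched in a few lines. The snippet below (an illustration under simple assumptions: univariate trapezoidal rules, a smooth test integrand, and no randomization) implements the combination-technique form of sparse-grid quadrature on $[0,1]^d$.

```python
# Deterministic skeleton of Smolyak's construction (illustration only; the talk
# concerns randomized versions): sparse-grid quadrature on [0,1]^d built from
# univariate trapezoidal rules via the combination technique.
import itertools
import numpy as np
from math import comb

def trap_rule(level):
    """Univariate trapezoidal rule with 2**level + 1 points on [0,1]."""
    k = 2 ** level + 1
    x = np.linspace(0.0, 1.0, k)
    w = np.full(k, 1.0 / (k - 1)); w[0] *= 0.5; w[-1] *= 0.5
    return x, w

def smolyak_quadrature(f, d, q):
    """Combination-technique Smolyak quadrature of total level q (q >= d)."""
    total = 0.0
    for levels in itertools.product(range(1, q + 1), repeat=d):
        s = sum(levels)
        if not (q - d + 1 <= s <= q):
            continue
        coef = (-1) ** (q - s) * comb(d - 1, q - s)
        rules = [trap_rule(lev) for lev in levels]
        nodes = np.array(list(itertools.product(*[x for x, _ in rules])))
        weights = np.prod(np.array(list(itertools.product(*[w for _, w in rules]))), axis=1)
        total += coef * np.dot(weights, f(nodes))
    return total

# Smooth test integrand with exact integral (1 - cos 1)^d.
f = lambda X: np.prod(np.sin(X), axis=1)
d = 4
exact = (1 - np.cos(1.0)) ** d
for q in range(d, d + 5):
    approx = smolyak_quadrature(f, d, q)
    print(f"q = {q}: error = {abs(approx - exact):.2e}")
```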
INI 1
11:40 to 12:15 Heping Wang (Capital Normal University)
Monte Carlo methods for $L_q$ approximation on periodic Sobolev spaces with mixed smoothness
In this talk we consider multivariate approximation of compact embeddings of periodic Sobolev spaces of dominating mixed smoothness into $L_q$, $2< q\leq \infty$, by linear Monte Carlo methods that use arbitrary linear information. We construct linear Monte Carlo methods and obtain explicit-in-dimension upper estimates. These estimates catch up with the rate of convergence.
INI 1
12:20 to 13:40 Lunch at Churchill College
13:40 to 14:15 Robert J. Kunsch (Universität Osnabrück)
Optimal Confidence for Monte Carlo Integration of Smooth Functions
We study the complexity $n(\varepsilon,\delta)$ of approximating the integral of smooth functions at absolute precision $\varepsilon > 0$ with confidence level $1 - \delta \in (0,1)$ using function evaluations as information within randomized algorithms. Methods that achieve optimal rates in terms of the root mean square error (RMSE) are not always optimal in terms of error at confidence; usually we need some non-linearity in order to suppress outliers. Besides, there are numerical problems which can be solved in terms of error at confidence but no algorithm can guarantee a finite RMSE, see [1]. Hence, the new error criterion seems to be more general than the classical RMSE. The sharp order for multivariate functions from classical isotropic Sobolev spaces $W_p^r([0,1]^d)$ can be achieved via control variates, as long as the space is embedded in the space of continuous functions $C([0,1]^d)$. It turns out that the integrability index $p$ has an effect on the influence of the uncertainty $\delta$ to the complexity, with the limiting case $p = 1$ where deterministic methods cannot be improved by randomization. In general, higher smoothness reduces the effort we need to take in order to increase the confidence level. Determining the complexity $n(\varepsilon,\delta)$ is much more challenging for mixed smoothness spaces $\mathbf{W}_p^r([0,1]^d)$. While optimal rates are known for the classical RMSE (as long as $\mathbf{W}_p^r([0,1]^d)$ is embedded in $L_2([0,1]^d)$), see [2], basic modifications of the corresponding algorithms fail to match the theoretical lower bounds for approximating the integral with prescribed confidence.

Joint work with Daniel Rudolf 

[1]  R.J. Kunsch, E. Novak, D. Rudolf. Solvable integration problems and optimal sample size selection. To appear in Journal of Complexity.
[2]  M. Ullrich. A Monte Carlo method for integration of multivariate smooth functions. SIAM Journal on Numerical Analysis, 55(3):1188-1200, 2017.
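The ``non-linearity to suppress outliers'' mentioned in the abstract is often realized by devices such as taking the median of independent sample means. The sketch below shows a plain median-of-means Monte Carlo integrator (a standard device used here for illustration, not necessarily the construction analysed in the talk).

```python
# A standard median-of-means Monte Carlo integrator (a common way to trade a
# root-mean-square guarantee for a confidence guarantee; not necessarily the
# construction analysed in the talk).
import numpy as np
from math import erf, pi, sqrt

def median_of_means_integral(f, d, n_total, n_groups, rng):
    """Estimate int_{[0,1]^d} f by the median of n_groups independent sample means."""
    per_group = n_total // n_groups
    means = []
    for _ in range(n_groups):
        X = rng.random((per_group, d))
        means.append(f(X).mean())
    return np.median(means)

rng = np.random.default_rng(5)
f = lambda X: np.exp(-np.sum(X ** 2, axis=1))          # smooth test integrand
est = median_of_means_integral(f, d=3, n_total=30000, n_groups=15, rng=rng)
exact = (sqrt(pi) / 2 * erf(1.0)) ** 3                 # exact value of the integral
print("median-of-means estimate:", est)
print("exact value             :", exact)
```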


INI 1
14:20 to 14:55 Markus Weimar (Ruhr-Universität Bochum)
Optimal recovery using wavelet trees
This talk is concerned with the approximation of embeddings between Besov-type spaces defined on bounded multidimensional domains or (patchwise smooth) manifolds. We compare the quality of approximations of three different strategies based on wavelet expansions. For this purpose, sharp rates of convergence corresponding to classical uniform refinement, best $N$-term, and best $N$-term tree approximation will be presented. In particular, we will see that whenever the embedding of interest is compact, greedy tree approximation schemes are as powerful as abstract best $N$-term approximation and that (for a large range of parameters) they can outperform uniform schemes based on a priori fixed (hence non-adaptively chosen) subspaces. This observation justifies the usage of adaptive non-linear algorithms in computational practice, e.g., for the approximate solution of boundary integral equations arising from physical applications. If time permits, implications for the related concept of approximation spaces associated to the three approximation strategies will be discussed.
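As a toy illustration of the gap between uniform and best $N$-term wavelet approximation (the tree constraint discussed in the talk is not implemented here, and the test function is an arbitrary choice), the following sketch applies a hand-rolled orthonormal Haar transform to a function with a single jump and compares the two truncation strategies.

```python
# Tiny illustration (not from the talk) of uniform vs. best N-term wavelet
# approximation, using a hand-rolled orthonormal Haar transform of a function
# with a jump; the tree constraint discussed in the talk is not implemented.
import numpy as np

def haar_transform(f):
    """Orthonormal discrete Haar transform, coefficients returned coarse-to-fine."""
    c = f.copy() / np.sqrt(len(f))               # scale so that the l2 norm matches the L2 norm
    out = []
    while len(c) > 1:
        avg = (c[0::2] + c[1::2]) / np.sqrt(2)
        det = (c[0::2] - c[1::2]) / np.sqrt(2)
        out.append(det)
        c = avg
    out.append(c)
    return np.concatenate(out[::-1])

n_grid = 2 ** 12
t = (np.arange(n_grid) + 0.5) / n_grid
f = np.sin(2 * np.pi * t) + (t > 0.37)           # smooth part plus a single jump

coeffs = haar_transform(f)
sorted_abs = np.sort(np.abs(coeffs))[::-1]
for n_terms in (16, 64, 256):
    err_best = np.sqrt(np.sum(sorted_abs[n_terms:] ** 2))   # discard all but the n_terms largest
    err_unif = np.sqrt(np.sum(coeffs[n_terms:] ** 2))        # keep only the coarsest n_terms coefficients
    print(f"N = {n_terms}: uniform error = {err_unif:.3e}, best N-term error = {err_best:.3e}")
```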
INI 1
15:00 to 15:30 Afternoon Tea
15:30 to 16:05 Discussion INI 1
16:10 to 16:45 Discussion INI 1
Friday 22nd February 2019
09:00 to 09:35 Jürgen Prestin (Universität zu Lübeck)
Shift-invariant Spaces of Multivariate Periodic Functions
One of the underlying ideas of multiresolution and wavelet analysis consists in the investigation of shift-invariant function spaces. In this talk, one-dimensional shift-invariant spaces of periodic functions are generalized to multivariate shift-invariant spaces on non-tensor-product patterns. These patterns are generated from a regular integer matrix. The decomposition of these spaces into shift-invariant subspaces can be discussed via the properties of these matrices. For these spaces we study different bases and their time-frequency localization. Of particular interest are multivariate orthogonal Dirichlet and de la Vall\'ee Poussin kernels and the respective wavelets. This approach also leads to an adaptive multiresolution. Finally, with these methods we construct shearlets and show how we can detect jump discontinuities of given cartoon-like functions.
INI 1
09:40 to 10:15 Bastian Bohn (Universität Bonn)
Least squares regression on sparse grids
In this talk, we first recapitulate the framework of least squares regression on certain sparse grid and hyperbolic cross spaces. The underlying numerical problem can be solved quite efficiently with state-of-the-art algorithms. Analyzing its stability and convergence properties, we can derive the optimal coupling between the number of necessary data samples and the degrees of freedom in the ansatz space. Our analysis is based on the assumption that the least-squares solution employs some kind of Sobolev regularity of dominating mixed smoothness, which is seldom encountered in real-world applications. Therefore, we present possible extensions of the basic sparse grid least squares algorithm by introducing suitable a-priori data transformations in the second part of the talk. These are tailored such that the resulting transformed problem suits the sparse grid structure.

Co-authors: Michael Griebel (University of Bonn), Jens Oettershagen (University of Bonn), Christian Rieger (University of Bonn)
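A minimal version of the least-squares setup described above can be sketched as follows (an illustration only: the hyperbolic-cross trigonometric ansatz space, the test function and the tenfold oversampling are assumptions, not the paper's choices).

```python
# Minimal least-squares sketch (illustration only): fit a 2-D function on random
# samples with a hyperbolic-cross trigonometric basis and solve the resulting
# overdetermined system with numpy's least-squares routine.
import numpy as np

rng = np.random.default_rng(6)

# hyperbolic-cross frequency set {k : prod_i max(1, |k_i|) <= K}
K = 8
freqs = [(k1, k2) for k1 in range(-K, K + 1) for k2 in range(-K, K + 1)
         if max(1, abs(k1)) * max(1, abs(k2)) <= K]

def design_matrix(X):
    cols = [np.exp(2j * np.pi * (k1 * X[:, 0] + k2 * X[:, 1])) for k1, k2 in freqs]
    return np.stack(cols, axis=1)

f = lambda X: np.exp(np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1]))

n_samples = 10 * len(freqs)                      # oversample the ansatz space
X = rng.random((n_samples, 2))
coef, *_ = np.linalg.lstsq(design_matrix(X), f(X).astype(complex), rcond=None)

Xt = rng.random((2000, 2))
err = np.max(np.abs(design_matrix(Xt) @ coef - f(Xt)))
print(f"{len(freqs)} basis functions, max test error: {err:.2e}")
```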
INI 1
10:20 to 11:00 Morning Coffee
11:00 to 11:35 Song Li (Zhejiang University)
Some Sparse Recovery Methods in Compressed Sensing
In this talk, I shall investigate some sparse recovery methods in compressed sensing. In particular, I will focus on the RIP approach and the D-RIP approach. As a result, we confirm a conjecture on the RIP, which is related to work of Terence Tao and Jean Bourgain in this field. Then, I will also investigate the relations between our work and statistics.
INI 1
11:40 to 12:15 tba INI 1
12:20 to 13:40 Lunch buffet at the INI
13:40 to 16:45 Discussion INI 1