Event  When  Speaker  Title  Presentation Material 

ASCW01 
18th February 2019 09:40 to 10:15 
Henryk Wozniakowski  Exponential tractability of weighted tensor product problems  
ASCW01 
18th February 2019 11:00 to 11:35 
Aicke Hinrichs 
Random sections of ellipsoids and the power of random information
We study the circumradius of the intersection of an $m$-dimensional ellipsoid~$\mathcal E$ with half axes $\sigma_1\geq\dots\geq \sigma_m$ with random subspaces of codimension $n$. We find that, under certain assumptions on $\sigma$, this random radius $\mathcal{R}_n=\mathcal{R}_n(\sigma)$ is of the same order as the minimal such radius $\sigma_{n+1}$ with high probability. In other situations $\mathcal{R}_n$ is close to the maximum~$\sigma_1$. The random variable $\mathcal{R}_n$ naturally corresponds to the worst-case error of the best algorithm based on random information for $L_2$-approximation of functions from a compactly embedded Hilbert space $H$ with unit ball $\mathcal E$. In particular, $\sigma_k$ is the $k$th largest singular value of the embedding $H\hookrightarrow L_2$. In this formulation, one can also consider the case $m=\infty$, and we prove that random information behaves very differently depending on whether $\sigma \in \ell_2$ or not. For $\sigma \notin \ell_2$ random information is completely useless. For $\sigma \in \ell_2$ the expected radius of random information tends to zero at least at rate $o(1/\sqrt{n})$ as $n\to\infty$. In the proofs we use a comparison result for Gaussian processes \`a la Gordon, exponential estimates for sums of chi-squared random variables, and estimates for the extreme singular values of (structured) Gaussian random matrices. This is joint work with David Krieg, Erich Novak, Joscha Prochno and Mario Ullrich.
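The quantity $\mathcal{R}_n$ can be explored numerically. The following sketch (an illustration, not the authors' code) draws a random subspace of codimension $n$ as the kernel of an $n\times m$ Gaussian matrix and computes the circumradius of its intersection with $\mathcal E$, which reduces to a smallest-eigenvalue computation:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 20                      # ambient dimension, codimension
sigma = 1.0 / np.arange(1, m + 1)  # half axes sigma_1 >= ... >= sigma_m

# Random subspace of codimension n: the kernel of an n x m Gaussian matrix.
G = rng.standard_normal((n, m))
# Orthonormal basis of ker(G): the last m - n right singular vectors of G.
_, _, Vt = np.linalg.svd(G)
B = Vt[n:].T                       # m x (m - n), orthonormal columns

# Circumradius of E ∩ ker(G): maximize |x| subject to sum (x_i/sigma_i)^2 <= 1
# over x = B c.  This gives R^2 = 1 / lambda_min(B^T diag(sigma^-2) B).
M = B.T @ (B / sigma[:, None] ** 2)
R = 1.0 / np.sqrt(np.linalg.eigvalsh(M)[0])

print(sigma[n], R, sigma[0])       # sigma_{n+1} <= R <= sigma_1
```

By Cauchy interlacing, $\sigma_{n+1}\le \mathcal{R}_n\le\sigma_1$ always holds; repeating over many draws gives an empirical picture of how close the random radius stays to the minimal one.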

ASCW01 
18th February 2019 11:40 to 12:15 
Yuri Malykhin  On some lower bounds for Kolmogorov widths  
ASCW01 
18th February 2019 13:40 to 14:15 
Jan Vybiral 
Approximation of Ridge Functions and Sparse Additive Models
The approximation of smooth multivariate functions is known to suffer the curse of dimensionality. We discuss the approximation of structured multivariate functions which take the form of a ridge, a sum of ridges, or of the so-called sparse additive models. We also give results about the optimality of such algorithms.


ASCW01 
18th February 2019 14:20 to 14:55 
Alexander Litvak 
Order statistics and the Mallat-Zeitouni problem
Let $X$ be an $n$-dimensional random centered Gaussian vector with independent but not necessarily identically distributed coordinates and let $T$ be an orthogonal transformation of $\mathbb{R}^n$. We show that the random vector $Y=T(X)$ satisfies $$\mathbb{E} \sum\limits_{j=1}^k j\mbox{-}\min_{i\leq n} X_{i}^2 \leq C\, \mathbb{E} \sum\limits_{j=1}^k j\mbox{-}\min_{i\leq n} Y_{i}^2$$ for all $k\leq n$, where ``$j\mbox{-}\min$'' denotes the $j$th smallest component of the corresponding vector and $C>0$ is a universal constant. This resolves (up to a multiplicative constant) an old question of S. Mallat and O. Zeitouni regarding the optimality of the Karhunen-Lo\`eve basis for nonlinear reconstruction. We also show some relations for order statistics of random vectors (not only Gaussian), which are of independent interest. This is joint work with Konstantin Tikhomirov.

ASCW01 
18th February 2019 15:30 to 16:05 
Mario Ullrich 
Construction of high-dimensional point sets with small dispersion
Based on deep results from coding theory, we present a deterministic algorithm that constructs a point set with dispersion at most $\varepsilon$ in dimension $d$, of size $\mathrm{poly}(1/\varepsilon)\cdot\log(d)$, which is optimal with respect to the dependence on $d$. The running time of the algorithm is, although super-exponential in $1/\varepsilon$, only polynomial in $d$.


ASCW01 
18th February 2019 16:10 to 16:45 
Discussion  
ASCW01 
19th February 2019 09:00 to 09:35 
Winfried Sickel 
The Haar System and Smoothness Spaces built on Morrey Spaces
For some Nikol'skij-Besov spaces $B^s_{p,q}$ the orthonormal Haar system can be used as an unconditional Schauder basis. Nowadays necessary and sufficient conditions on $p,q$ and $s$ are known for this property. In recent years a number of papers have investigated modifications of Nikol'skij-Besov spaces based on Morrey spaces. In my talk I will concentrate on a version called Besov-type spaces, denoted by $B^{s,\tau}_{p,q}$. It will be my aim to discuss some necessary and some sufficient conditions on the parameters $p,q,s,\tau$ such that one can characterize these classes by means of the Haar system. This is joint work with Dachun Yang and Wen Yuan (Beijing Normal University).


ASCW01 
19th February 2019 09:40 to 10:15 
Dachun Yang 
Ball Average Characterizations of Function Spaces
It is well known that function spaces play an important role in the study of various problems in analysis. In this talk, we present pointwise and ball average characterizations of function spaces including Sobolev spaces, Besov spaces and Triebel-Lizorkin spaces on Euclidean spaces. These characterizations have the advantage that they can be used as definitions of these function spaces on metric measure spaces. Some open questions are also presented in this talk. 

ASCW01 
19th February 2019 11:00 to 11:35 
Wen Yuan 
Embedding and continuity envelopes of Besov-type spaces
In this talk, we discuss sharp embedding properties between Besov-type and Triebel-Lizorkin-type spaces and present some related necessary and sufficient conditions for these embeddings. The corresponding continuity envelopes are also worked out. 

ASCW01 
19th February 2019 11:40 to 12:15 
Bin Han 
Directional Framelets with Low Redundancy and Directional Quasi-tight Framelets
Edge singularities are ubiquitous and hold key information for many high-dimensional problems. Consequently, directional representation systems are required to effectively capture edge singularities for high-dimensional problems. However, increased angular resolution often significantly increases the redundancy rate of a directional system. High redundancy rates lead to expensive computational costs and large storage requirements, which hinder the usefulness of such directional systems for problems in moderately high dimensions such as video processing. In this talk, we attack this problem by using directional tensor product complex tight framelets with mixed sampling factors. The resulting directional system has good directionality with a very low redundancy rate $\frac{3^d-1}{2^d-1}$; e.g., the redundancy rates are $2$, $2\frac{2}{3}$, $3\frac{5}{7}$, $5\frac{1}{3}$ and $7\frac{25}{31}$ for dimensions $d=1,\ldots,5$. Our numerical experiments on image/video denoising and inpainting show that the performance of our proposed directional system with low redundancy rate is comparable to or better than several state-of-the-art methods which have much higher redundancy rates. In the second part, we shall discuss our recent developments of directional quasi-tight framelets in high dimensions. This is a joint work with Chenzhe Diao, Zhenpeng Zhao and Xiaosheng Zhuang. 
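The stated redundancy rates follow directly from the formula $\frac{3^d-1}{2^d-1}$; a quick check with exact rational arithmetic:

```python
from fractions import Fraction

# Redundancy rates (3^d - 1)/(2^d - 1) of the directional system for d = 1..5.
rates = [Fraction(3**d - 1, 2**d - 1) for d in range(1, 6)]
for d, r in zip(range(1, 6), rates):
    print(d, r)   # 2, 8/3, 26/7, 16/3, 242/31
```

These are exactly the mixed numbers quoted in the abstract ($80/15$ reduces to $16/3 = 5\frac{1}{3}$).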

ASCW01 
19th February 2019 13:40 to 14:15 
Clayton Webster 
Polynomial approximation via compressed sensing of high-dimensional functions on lower sets
This talk will focus on compressed sensing approaches to sparse polynomial approximation of complex functions in high dimensions. Of particular interest is the parameterized PDE setting, where the target function is smooth, characterized by a rapidly decaying orthonormal expansion, whose most important terms are captured by a lower (or downward closed) set. By exploiting this fact, we will present and analyze several procedures for exactly reconstructing a set of (jointly) sparse vectors from incomplete measurements. These include novel weighted $\ell_1$ minimization, improved iterative hard thresholding, mixed convex relaxations, as well as nonconvex penalties. Theoretical recovery guarantees will also be presented based on improved bounds for the restricted isometry property, as well as unified null space properties that encompass all currently proposed nonconvex minimizations. Numerical examples are provided to support the theoretical results and demonstrate the computational efficiency of the described compressed sensing methods.
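As an illustration of the simplest member of this family of methods, here is a minimal (unweighted) iterative hard thresholding sketch. The dimensions, step-size rule and test signal are arbitrary choices for the example and do not reproduce the improved variants discussed in the talk:

```python
import numpy as np

def iht(A, y, s, iters=300):
    """Iterative hard thresholding: gradient step, then keep s largest entries."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2    # conservative step size
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)      # gradient step on |y - Ax|^2 / 2
        small = np.argsort(np.abs(x))[:-s]  # all but the s largest entries
        x[small] = 0.0                      # hard-threshold to s-sparsity
    return x

rng = np.random.default_rng(1)
n, N, s = 100, 200, 5
A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian measurement matrix
support = rng.choice(N, size=s, replace=False)
x_true = np.zeros(N)
x_true[support] = rng.uniform(1.0, 2.0, s) * rng.choice([-1.0, 1.0], s)
y = A @ x_true
x_hat = iht(A, y, s)
print(np.linalg.norm(x_hat - x_true))          # small recovery error
```

The weighted $\ell_1$ and nonconvex approaches mentioned in the abstract replace the hard-thresholding projection with other sparsity-promoting steps.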


ASCW01 
19th February 2019 14:20 to 14:55 
Oscar Dominguez 
Characterizations of Besov spaces in terms of $K$-functionals
Besov spaces occur naturally in many fields of analysis. In this talk, we discuss various characterizations of Besov spaces in terms of different $K$-functionals. For instance, we present descriptions via oscillations, Bianchini-type norms and approximation methods. This is joint work with S. Tikhonov (Barcelona).


ASCW01 
19th February 2019 15:30 to 16:05 
Wenrui Ye 
Local restriction theorem and maximal Bochner-Riesz operator for the Dunkl transforms
In this talk, I will mainly describe joint work with Dr. Feng Dai on the critical index for the almost everywhere convergence of the Bochner-Riesz means in weighted $L_p$ spaces with $p=1$ or $p>2$. Our results in the case $p>2$ are in full analogy with the classical result of M. Christ on estimates of the maximal Bochner-Riesz means of Fourier integrals and the classical result of A. Carbery, José L. Rubio de Francia and L. Vega on a.e. convergence of Fourier integrals. Besides, I will also introduce several new results that are related to our main results, including: (i) a local restriction theorem for the Dunkl transform which is significantly stronger than the global one, but more difficult to prove; (ii) the weighted Littlewood-Paley inequality with $A_p$ weights in the Dunkl noncommutative setting; (iii) sharp local pointwise estimates of several important kernel functions. 

ASCW01 
19th February 2019 16:10 to 16:45 
Discussion  
ASCW01 
20th February 2019 09:00 to 09:35 
Holger Rauhut 
Recovery of functions of many variables via compressive sensing
The talk will report on the use of compressive sensing for the recovery of functions of many variables from sample values. We will cover trigonometric expansions as well as expansions in tensorized orthogonal polynomial systems and provide convergence rates in terms of the number of samples which avoid the curse of dimensionality. The technique can be used for the numerical solution of parametric operator equations.


ASCW01 
20th February 2019 09:40 to 10:15 
Dũng Dinh 
Dimension-dependent error estimates for sampling recovery on Smolyak grids
We investigate dimension-dependent estimates of the approximation error for linear algorithms of sampling recovery on Smolyak grids parametrized by $m$, of periodic $d$-variate functions from the space with Lipschitz-H\"older mixed smoothness $\alpha > 0$. For the subsets of the unit ball in this space of functions with homogeneous condition and of functions depending on $\nu$ active variables ($1 \le \nu \le d$), respectively, we prove some upper bounds and lower bounds (for $\alpha \le 2$) of the error of optimal sampling recovery on Smolyak grids, explicit in $d$, $\nu$, $m$ when $d$ and $m$ may be large. This is joint work with Mai Xuan Thao, Hong Duc University, Thanh Hoa, Vietnam. 

ASCW01 
20th February 2019 11:00 to 11:35 
Martin Buhmann 
Recent Results on Rational Approximation and Interpolation with Completely and Multiply Monotone Radial Basis Functions
We will report on new results about approximations to continuous functions of multiple variables. We shall use either approximation with interpolation or approximation by rational functions. For these kinds of approximations, radial basis functions are particularly attractive, as they provide regular, positive definite or conditionally positive definite approximations, independent of the spatial dimension and independent of the distribution of the data points we wish to work with. These interpolants have many applications, for example in solving nonlinear partial differential equations by collocation. In this talk, we classify radial basis and other functions that are useful for such scattered data interpolation or for rational approximations from vector spaces spanned by translates of those basis functions (kernels); for this we study in particular multiply and/or completely monotone functions. We collect special properties of such monotone functions, generalise them and find larger classes than the well-known monotone functions for multivariate interpolation. Furthermore, we discuss efficient ways to compute rational approximations using the same type of kernels.


ASCW01 
20th February 2019 11:40 to 12:15 
Lutz Kaemmerer 
Multiple Rank1 Lattices as Sampling Schemes for Approximation
The approximation of functions using sampling values along single rank-1 lattices leads to convergence rates of the approximation errors that are far from optimal in spaces of dominating mixed smoothness. A recently published idea uses sampling values along several rank-1 lattices in order to reconstruct multivariate trigonometric polynomials. Together with fast methods for the construction of these sampling schemes and the available fast Fourier transform algorithms, this motivates investigations of the approximation properties of the arising sampling operators applied to functions of specific smoothness, in particular functions of dominating mixed smoothness, which naturally leads to hyperbolic cross approximations.
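For reference, a single rank-1 lattice with $n$ points and generating vector $z\in\mathbb{Z}^d$ consists of the nodes $x_j=\{jz/n\}$, $j=0,\dots,n-1$. A minimal sketch (the lattice size and generating vector below are arbitrary illustrations, not the constructions from the talk):

```python
import numpy as np

def rank1_lattice(n, z):
    """Nodes x_j = frac(j * z / n), j = 0, ..., n-1, of a rank-1 lattice."""
    j = np.arange(n)[:, None]
    return (j * np.asarray(z)[None, :] % n) / n

# Example: n = 21 points in [0, 1)^2 with generating vector z = (1, 13).
P = rank1_lattice(21, [1, 13])
print(P.shape)   # (21, 2)
```

The multiple-lattice schemes of the talk sample along several such point sets and combine the resulting lattice FFTs.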


ASCW01 
21st February 2019 09:00 to 09:35 
Thomas Kuehn 
Preasymptotic estimates for approximation of multivariate periodic Sobolev functions
Approximation of Sobolev functions is a topic with a long history and many applications in different branches of mathematics. The asymptotic order as $n\to\infty$ of the approximation numbers $a_n$ is well known for embeddings of isotropic Sobolev spaces and also for Sobolev spaces of dominating mixed smoothness. However, if the dimension $d$ of the underlying domain is very high, one has to wait exponentially long until the asymptotic rate becomes visible. Hence, for computational issues this rate is useless; what really matters is the preasymptotic range, say $n\le 2^d$. In the talk I will first give a short overview of this relatively new field. Then I will present some new preasymptotic estimates for $L_2$-approximation of periodic Sobolev functions, which improve the previously known results. I will discuss the cases of isotropic and dominating mixed smoothness, and also $C^\infty$-functions of Gevrey type. Clearly, on all these spaces there are many equivalent norms. It is an interesting effect that, in contrast to the asymptotic rates, the preasymptotic behaviour strongly depends on the chosen norm. 

ASCW01 
21st February 2019 09:40 to 10:15 
Konstantin Ryutin 
Best $m$-term approximation of the "step function" and related problems
The main point of the talk is the problem of approximation of the step function by $m$-term trigonometric polynomials, and some closely related problems: the approximate rank of a specific triangular matrix and the Kolmogorov width of BV functions. This problem has its origins in approximation theory (best sparse approximation and Kolmogorov widths) as well as in computer science (approximate rank of a matrix). There are different approaches and techniques: the $\gamma_2$-norm, random approximations, orthomassivity of a set, and more. I plan to show what can be achieved by these techniques.


ASCW01 
21st February 2019 11:00 to 11:35 
Michael Gnewuch 
Explicit error bounds for randomized Smolyak algorithms and an application to infinite-dimensional integration
Smolyak's method, also known as hyperbolic cross approximation or the sparse grid method, is a powerful tool to tackle multivariate tensor product problems with the help of efficient algorithms for the corresponding univariate problem. We provide upper and lower error bounds for randomized Smolyak algorithms with fully explicit dependence on the number of variables and the number of information evaluations used. The error criteria we consider are the worst-case root mean square error (the typical error criterion for randomized algorithms, often referred to as ``randomized error'') and the root mean square worst-case error (often referred to as ``worst-case error''). Randomized Smolyak algorithms can be used as building blocks for efficient methods, such as multilevel algorithms, multivariate decomposition methods or dimension-wise quadrature methods, to successfully tackle high-dimensional or even infinite-dimensional problems. As an example, we provide a very general and sharp result on infinite-dimensional integration on weighted reproducing kernel Hilbert spaces and illustrate it for the special case of weighted Korobov spaces. We explain how this result can be extended, e.g., to spaces of functions whose smooth dependence on successive variables increases (``spaces of increasing smoothness'') and to the problem of $L_2$-approximation (function recovery). 
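A deterministic Smolyak quadrature in two dimensions can be sketched via the combination technique. This toy version with univariate trapezoidal rules is an illustration of the general construction, not the randomized algorithms analyzed in the talk:

```python
import numpy as np

def trap(level):
    """Univariate trapezoidal nodes/weights on [0, 1] with 2^level + 1 points."""
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def smolyak_quad_2d(f, q):
    """Smolyak quadrature in d = 2 via the combination technique:
    tensor rules with |l| = q enter with +1, those with |l| = q - 1 with -1."""
    total = 0.0
    for coeff, s in ((1.0, q), (-1.0, q - 1)):
        for l1 in range(0, s + 1):
            l2 = s - l1
            x1, w1 = trap(l1)
            x2, w2 = trap(l2)
            X1, X2 = np.meshgrid(x1, x2, indexing="ij")
            total += coeff * np.sum(np.outer(w1, w2) * f(X1, X2))
    return total

# Smooth test integrand: the exact integral of exp(x+y) over [0,1]^2 is (e-1)^2.
val = smolyak_quad_2d(lambda x, y: np.exp(x + y), q=8)
print(abs(val - (np.e - 1) ** 2))   # small error
```

The combination coefficients come from telescoping the univariate difference rules; only grids near the diagonal $|l|=q$ are evaluated, which is the source of the method's efficiency.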

ASCW01 
21st February 2019 11:40 to 12:15 
Heping Wang 
Monte Carlo methods for $L_q$ approximation on periodic Sobolev spaces with mixed smoothness
In this talk we consider multivariate approximation of compact embeddings of periodic Sobolev spaces of dominating mixed smoothness into the space $L_q$, $2< q\leq \infty$, by linear Monte Carlo methods that use arbitrary linear information. We construct linear Monte Carlo methods and obtain explicit-in-dimension upper estimates. These estimates attain the known rate of convergence.


ASCW01 
21st February 2019 13:40 to 14:15 
Robert J. Kunsch 
Optimal Confidence for Monte Carlo Integration of Smooth Functions
We study the complexity $n(\varepsilon,\delta)$ of approximating the integral of smooth functions at absolute precision $\varepsilon > 0$ with confidence level $1 - \delta \in (0,1)$, using function evaluations as information within randomized algorithms. Methods that achieve optimal rates in terms of the root mean square error (RMSE) are not always optimal in terms of error at confidence; usually we need some nonlinearity in order to suppress outliers. Besides, there are numerical problems which can be solved in terms of error at confidence but for which no algorithm can guarantee a finite RMSE, see [1]. Hence, the new error criterion seems to be more general than the classical RMSE. The sharp order for multivariate functions from classical isotropic Sobolev spaces $W_p^r([0,1]^d)$ can be achieved via control variates, as long as the space is embedded in the space of continuous functions $C([0,1]^d)$. It turns out that the integrability index $p$ has an effect on the influence of the uncertainty $\delta$ on the complexity, with the limiting case $p = 1$ where deterministic methods cannot be improved by randomization. In general, higher smoothness reduces the effort needed to increase the confidence level. Determining the complexity $n(\varepsilon,\delta)$ is much more challenging for mixed smoothness spaces $\mathbf{W}_p^r([0,1]^d)$. While optimal rates are known for the classical RMSE (as long as $\mathbf{W}_p^r([0,1]^d)$ is embedded in $L_2([0,1]^d)$), see [2], basic modifications of the corresponding algorithms fail to match the theoretical lower bounds for approximating the integral with prescribed confidence. Joint work with Daniel Rudolf.
[1] R. J. Kunsch, E. Novak, D. Rudolf. Solvable integration problems and optimal sample size selection. To appear in Journal of Complexity.
[2] M. Ullrich. A Monte Carlo method for integration of multivariate smooth functions. SIAM Journal on Numerical Analysis, 55(3):1188-1200, 2017.
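A standard way to trade samples for confidence, in the spirit of the nonlinearity mentioned above, is the median-of-means estimator; the following sketch (an illustration with an arbitrary test integrand) shows the idea:

```python
import numpy as np

def median_of_means(sample, groups):
    """Split the sample into groups, average each group, take the median.
    A few outlier-corrupted groups cannot move the median much."""
    chunks = np.array_split(np.asarray(sample), groups)
    return float(np.median([c.mean() for c in chunks]))

rng = np.random.default_rng(2)
# Plain Monte Carlo for the integral of x^2 over [0, 1] (exact value 1/3).
vals = rng.random(12000) ** 2
est = median_of_means(vals, groups=15)
print(est)   # close to 1/3
```

The deviation probability of the median decays exponentially in the number of groups, which is how such nonlinear estimators achieve good error-at-confidence behaviour even when the plain sample mean has heavy-tailed fluctuations.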

ASCW01 
21st February 2019 14:20 to 14:55 
Markus Weimar 
Optimal recovery using wavelet trees
This talk is concerned with the approximation of embeddings between Besov-type spaces defined on bounded multidimensional domains or (patchwise smooth) manifolds. We compare the quality of three different approximation strategies based on wavelet expansions. For this purpose, sharp rates of convergence corresponding to classical uniform refinement, best $N$-term, and best $N$-term tree approximation will be presented. In particular, we will see that whenever the embedding of interest is compact, greedy tree approximation schemes are as powerful as abstract best $N$-term approximation and that (for a large range of parameters) they can outperform uniform schemes based on a priori fixed (hence non-adaptively chosen) subspaces. This observation justifies the usage of adaptive nonlinear algorithms in computational practice, e.g., for the approximate solution of boundary integral equations arising from physical applications. If time permits, implications for the related concept of approximation spaces associated to the three approximation strategies will be discussed.


ASCW01 
21st February 2019 15:30 to 16:05 
Discussion  
ASCW01 
21st February 2019 16:10 to 16:45 
Discussion  
ASCW01 
22nd February 2019 09:00 to 09:35 
Jürgen Prestin 
Shift-invariant Spaces of Multivariate Periodic Functions
One of the underlying ideas of multiresolution and wavelet analysis consists in the investigation of shift-invariant function spaces. In this talk, one-dimensional shift-invariant spaces of periodic functions are generalized to multivariate shift-invariant spaces on non-tensor-product patterns. These patterns are generated from a regular integer matrix. The decomposition of these spaces into shift-invariant subspaces can be discussed via the properties of these matrices. For these spaces we study different bases and their time-frequency localization. Of particular interest are multivariate orthogonal Dirichlet and de la Vall\'ee Poussin kernels and the respective wavelets. This approach also leads to an adaptive multiresolution. Finally, with these methods we construct shearlets and show how we can detect jump discontinuities of given cartoon-like functions.


ASCW01 
22nd February 2019 09:40 to 10:15 
Bastian Bohn 
Least squares regression on sparse grids
In this talk, we first recapitulate the framework of least squares regression on certain sparse grid and hyperbolic cross spaces. The underlying numerical problem can be solved quite efficiently with state-of-the-art algorithms. Analyzing its stability and convergence properties, we can derive the optimal coupling between the number of necessary data samples and the degrees of freedom in the ansatz space. Our analysis is based on the assumption that the least-squares solution possesses some kind of Sobolev regularity of dominating mixed smoothness, which is seldom encountered in real-world applications. Therefore, we present possible extensions of the basic sparse grid least squares algorithm by introducing suitable a priori data transformations in the second part of the talk. These are tailored such that the resulting transformed problem suits the sparse grid structure. Co-authors: Michael Griebel (University of Bonn), Jens Oettershagen (University of Bonn), Christian Rieger (University of Bonn) 

ASCW01 
22nd February 2019 11:00 to 11:35 
Song Li 
Some Sparse Recovery Methods in Compressed Sensing
In this talk, I shall investigate some sparse recovery methods in compressed sensing. In particular, I will focus on the RIP approach and the DRIP approach. As a result, we confirm a conjecture on the RIP, which is related to work of Terence Tao and Jean Bourgain in this field. I will also discuss the relations between our work and statistics.


ASCW01 
22nd February 2019 11:40 to 12:15 
tba  
ASCW01 
22nd February 2019 13:40 to 16:45 
Discussion 