
Practical and information-theoretic limitations in high-dimensional inference

Tuesday 8th January 2008 - 09:00 to 10:00
INI Seminar Room 1
Session Chair: Di Cook

This talk considers two types of questions concerning high-dimensional inference. First, given a practical (polynomial-time) algorithm, what are the limits of its performance? Second, how do such practical limitations compare to information-theoretic bounds, which apply to the performance of any algorithm regardless of computational complexity?

We analyze these issues in high-dimensional versions of two canonical inference problems: (a) support recovery in sparse regression; and (b) the sparse PCA or eigenvector problem. For the sparse regression problem, we describe a sharp threshold on the sample size n that controls success/failure of \ell_1-constrained quadratic programming (the Lasso), as a function of the problem size p and the sparsity index k (the number of non-zero entries). Using information-theoretic methods, we prove that the Lasso is order-optimal for sublinear sparsity (vanishing k/p), but sub-optimal for linear sparsity (k/p bounded away from zero). For the sparse eigenvector problem, we analyze a semidefinite programming relaxation due to d'Aspremont et al., and establish a similar success/failure transition for triplets (n,p,k) tending to infinity.
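The support-recovery phenomenon described above can be illustrated empirically. The following is a minimal sketch, not the talk's actual experiments: it assumes a standard Gaussian design, a k-sparse coefficient vector, a sample size n scaled as a multiple of k log(p - k) (the scaling at which the threshold occurs), and a regularization level chosen on the order of sqrt(log p / n); all of these specific constants are illustrative choices, not values from the talk.

```python
# Illustrative sketch of Lasso support recovery (assumed setup, not the
# speaker's experiments): Gaussian design, k-sparse signal, n on the order
# of k * log(p - k), regularization ~ sqrt(log p / n).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, k = 200, 5
n = int(4 * k * np.log(p - k))  # sample size above the recovery threshold

# k-sparse ground truth with unit-strength non-zero entries
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)

# Lagrangian form of the ell_1-constrained program; the alpha scaling
# sigma * sqrt(log p / n) is the standard theoretical choice
alpha = 2 * 0.1 * np.sqrt(np.log(p) / n)
fit = Lasso(alpha=alpha).fit(X, y)
support = set(np.flatnonzero(fit.coef_))
print("true support recovered:", set(range(k)).issubset(support))
```

Shrinking n well below k log(p - k) in this sketch makes recovery fail with high probability, which is the sharp transition the abstract refers to.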

Based on joint work with Arash Amini, John Lafferty, and Pradeep Ravikumar.
