

Seminar

Practical and information-theoretic limitations in high-dimensional inference

Wainwright, M (Berkeley)
Tuesday 08 January 2008, 09:00-10:00

Seminar Room 1, Newton Institute

Abstract

This talk considers two types of questions concerning high-dimensional inference. First, given a practical (polynomial-time) algorithm, what are the limits of its performance? Second, how do such practical limitations compare to information-theoretic bounds, which apply to the performance of any algorithm, regardless of computational complexity?

We analyze these issues in high-dimensional versions of two canonical inference problems: (a) support recovery in sparse regression; and (b) the sparse PCA or eigenvector problem. For the sparse regression problem, we describe a sharp threshold on the sample size n that controls the success/failure of \ell_1-constrained quadratic programming (the Lasso), as a function of the problem size p and the sparsity index k (the number of non-zero entries). Using information-theoretic methods, we prove that the Lasso is order-optimal for sublinear sparsity (vanishing k/p), but sub-optimal for linear sparsity (k/p bounded away from zero). For the sparse eigenvector problem, we analyze a semidefinite programming relaxation due to d'Aspremont et al., and establish a similar success/failure transition as the triplet (n, p, k) tends to infinity.
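The two sketches below are not part of the talk; they are minimal, illustrative simulations of the two problems described above. The first simulates Lasso support recovery using scikit-learn; the sample-size scaling n ≈ 2k log(p − k), the noise level, and the regularization weight are assumptions chosen for illustration, and exact recovery is probabilistic rather than guaranteed on any single draw.

    # Illustrative sketch (not from the talk): Lasso support recovery in the
    # (n, p, k) regime the abstract describes. The scaling n ~ 2k log(p - k)
    # and the choice of alpha are assumptions for demonstration purposes.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    p, k = 512, 8                        # problem size and sparsity index
    n = int(2 * k * np.log(p - k))       # assumed sample-size scaling

    # k-sparse ground truth with nonzero entries bounded away from zero.
    beta = np.zeros(p)
    support = rng.choice(p, size=k, replace=False)
    beta[support] = rng.choice([-1.0, 1.0], size=k)

    # Standard Gaussian design with additive Gaussian noise.
    X = rng.standard_normal((n, p))
    y = X @ beta + 0.25 * rng.standard_normal(n)

    # l1-penalized least squares; alpha ~ sigma * sqrt(log p / n) is a
    # common theoretical choice, used here as an assumption.
    alpha = 0.25 * np.sqrt(np.log(p) / n)
    fit = Lasso(alpha=alpha, max_iter=10_000).fit(X, y)

    recovered = set(np.flatnonzero(np.abs(fit.coef_) > 1e-3))
    print("exact support recovery:", recovered == set(support))

The second sketch solves, with cvxpy, an l1-penalized semidefinite relaxation of sparse PCA in the spirit of the d'Aspremont et al. formulation mentioned above; the spiked-covariance model and the penalty weight rho are likewise illustrative assumptions.

    # Illustrative sketch (not from the talk): a semidefinite relaxation for
    # sparse PCA, maximizing trace(Sigma Z) - rho * sum|Z_ij| over the set
    # {Z PSD, trace(Z) = 1}. The setup and the value of rho are assumptions.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    p, k, n = 30, 5, 200

    # Spiked covariance: identity plus a rank-one spike on a k-sparse vector.
    v = np.zeros(p)
    v[:k] = 1.0 / np.sqrt(k)
    Sigma_true = np.eye(p) + 4.0 * np.outer(v, v)
    X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
    Sigma_hat = X.T @ X / n              # sample covariance from n draws

    Z = cp.Variable((p, p), symmetric=True)
    rho = 0.1
    objective = cp.Maximize(cp.trace(Sigma_hat @ Z) - rho * cp.sum(cp.abs(Z)))
    cp.Problem(objective, [Z >> 0, cp.trace(Z) == 1]).solve()

    # The leading eigenvector of the optimal Z estimates the sparse direction.
    eigvals, eigvecs = np.linalg.eigh(Z.value)
    print("estimated support:", np.flatnonzero(np.abs(eigvecs[:, -1]) > 1e-2))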

Based on joint work with Arash Amini, John Lafferty, and Pradeep Ravikumar.

Presentation

[pdf]

Audio

MP3
