

Seminar

Learning in high dimensions, noise, sparsity and treelets

Nadler, B (Weizmann)
Wednesday 09 January 2008, 16:30-17:30

Seminar Room 1, Newton Institute

Abstract

In recent years there has been a growing practical need to perform learning (classification, regression, etc.) in high-dimensional settings where $p \gg n$. Consequently, instead of the standard limit $n\to\infty$, learning algorithms are typically analyzed in the joint limit $p,n\to\infty$. In this talk we present a different approach that keeps $p,n$ fixed but treats the noise level as a small parameter. The resulting perturbation analysis reveals the importance of a robust low-dimensional representation of the noise-free signals, the possible failure of simple variable selection methods, and the key role of sparsity for the success of learning in high dimensions. We also discuss sparsity in an a priori unknown basis and a possible data-driven adaptive construction of such a basis, called treelets. We present a few applications of our analysis, mainly to error-in-variables linear regression, principal component analysis, and rank determination.
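As a concrete illustration of the regime the abstract describes (a minimal sketch, not material from the talk), the simulation below keeps $p$ and $n$ fixed with $p \gg n$ and varies the noise level: a one-dimensional noise-free signal direction is observed under isotropic noise, and we track how well the leading principal component of the sample covariance recovers the true direction as the noise grows. All variable names and parameter values here are illustrative assumptions.

```python
# Sketch: p and n held fixed (p >> n), noise level sigma treated as the
# small parameter, in the spirit of the perturbation analysis above.
import numpy as np

rng = np.random.default_rng(0)
p, n = 500, 50                      # more variables than samples
v = rng.standard_normal(p)
v /= np.linalg.norm(v)              # true signal direction (unit vector)

for sigma in [0.1, 1.0, 3.0]:
    # each row: a random multiple of v plus isotropic noise of level sigma
    X = rng.standard_normal((n, 1)) @ v[None, :] \
        + sigma * rng.standard_normal((n, p))
    # leading right singular vector of the centered data = first PC
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    overlap = abs(Vt[0] @ v)        # |cos(angle)| between PC1 and v
    print(f"sigma={sigma:4.1f}  |<PC1, v>| = {overlap:.3f}")
```

For small sigma the overlap stays near 1, while for larger sigma the estimated component loses track of the signal, which is the kind of noise-driven breakdown the perturbation viewpoint is designed to quantify.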

Related Links

Presentation

[pdf]

Audio

MP3

Video

