Seminar Room 2, Newton Institute Gatehouse
Substantial progress has recently been made on understanding the behaviour of sparse linear models in the high-dimensional setting, where the number of variables can greatly exceed the number of samples. This problem has attracted the interest of multiple communities, including applied mathematics, signal processing, statistics, and machine learning. But linear models often rely on unrealistically strong assumptions, made mainly for convenience. Going beyond parametric models, can we understand the properties of high-dimensional functions that enable them to be estimated accurately from sparse data? In this talk we present some progress on this problem, showing that many of the recent results for sparse linear models can be extended to the infinite-dimensional setting of nonparametric function estimation. In particular, we present some theory for estimating sparse additive models, together with algorithms that are scalable to high dimensions. We illustrate these ideas with an application to functional sparse coding of natural images. This is joint work with Han Liu, Pradeep Ravikumar, and Larry Wasserman.
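As a rough illustration of the kind of estimator the abstract refers to, the following is a minimal sketch of a sparse-additive-model (SpAM-style) backfitting loop: each component function is fit by smoothing the partial residual on its own covariate, then soft-thresholded by its empirical norm so that irrelevant components are shrunk to zero. This is not the speakers' implementation; the smoother choice, bandwidth, thresholding rule, and fixed iteration count are all illustrative assumptions.

```python
import numpy as np

def kernel_smooth(x, r, bandwidth=0.3):
    """Nadaraya-Watson (Gaussian kernel) smoother of residual r on covariate x."""
    d = (x[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * d**2)
    return (w @ r) / w.sum(axis=1)

def spam_backfit(X, y, lam, n_iter=20, bandwidth=0.3):
    """Sketch of a SpAM-style backfitting loop.

    For an additive model y = sum_j f_j(x_j) + noise: smooth the partial
    residual for each coordinate, then soft-threshold the fitted component
    by its empirical L2 norm (this is what induces sparsity over functions).
    Returns an (n, p) array whose column j holds f_j evaluated at the data.
    """
    n, p = X.shape
    y = y - y.mean()              # absorb the intercept
    F = np.zeros((n, p))          # fitted component functions f_j(x_ij)
    for _ in range(n_iter):
        for j in range(p):
            r = y - F.sum(axis=1) + F[:, j]      # partial residual for f_j
            pj = kernel_smooth(X[:, j], r, bandwidth)
            sj = np.sqrt(np.mean(pj**2))          # empirical norm of f_j
            shrink = max(0.0, 1.0 - lam / sj) if sj > 0 else 0.0
            fj = shrink * pj                      # soft-threshold the function
            F[:, j] = fj - fj.mean()              # center each component
    return F
```

For example, on data generated from two relevant and two irrelevant covariates, a moderate `lam` shrinks the irrelevant components toward zero while retaining the relevant ones, and a very large `lam` zeroes out every component.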