# More unknowns than equations? Not a problem! Use Sparsity!

Presented by:
D. Donoho, Stanford University
Date:
Monday 17th March 2008 - 17:00 to 18:00
Venue:
INI Seminar Room 1
Abstract:
Everything you were taught about underdetermined systems of linear equations is wrong...

Okay, that's too strong. But you have been taught things in undergraduate linear algebra which, if you are an engineer or scientist, may be holding you back. The main one is that if you have more unknowns than equations, you're lost. Don't believe it. At the moment there are many interesting problems in the information sciences where researchers are confounding expectations by turning linear algebra upside down:

• (a) A standard magnetic resonance imaging device can now produce a clinical-quality image in a factor of 8 less scan time than previously thought possible.
• (b) A Fourier imaging system can observe just the lowest frequencies of a sparse nonnegative signal and perfectly reconstruct all the unmeasured high frequencies of the signal.
• (c) A communications system can transmit a very weak signal perfectly in the presence of intermittent but arbitrarily powerful jamming.
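As a rough numerical sketch of example (b) above (not code from the talk, and all sizes here are illustrative assumptions), the snippet below builds a sparse nonnegative signal, observes only its lowest Fourier frequencies, and asks a linear-programming feasibility problem for any nonnegative signal matching those measurements. Because a k-sparse nonnegative signal is pinned down by its 2k+1 lowest frequencies, the feasible point found is the original signal, high frequencies included.

```python
# Sketch: perfect reconstruction of a sparse nonnegative signal
# from only its lowest Fourier frequencies (illustrative sizes).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k = 64, 4                          # signal length, number of spikes
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

L = 2 * k + 1                         # only the L lowest frequencies observed
F = np.exp(-2j * np.pi * np.outer(np.arange(L), np.arange(n)) / n)
b = F @ x0                            # the low-frequency measurements

# Feasibility LP (zero objective): find any x >= 0 matching the
# real and imaginary parts of the observed low frequencies.
A_eq = np.vstack([F.real, F.imag])
b_eq = np.concatenate([b.real, b.imag])
res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs")

print("reconstructed all", n, "samples from", L, "frequencies:",
      np.allclose(res.x, x0, atol=1e-4))
```

Note that only 2·4+1 = 9 complex low-frequency measurements determine all 64 samples; nonnegativity plus sparsity does the rest.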

Moreover, in each case the methods are convenient and computationally tractable.

Mathematically, what's going on is a recent explosion of interest in finding the sparsest solution to certain systems of underdetermined linear equations. This problem is known to be NP-hard in general, and hence sounds intractable. Surprisingly, in many particular cases one can find the sparsest solution by l¹ minimization, which is a convex optimization problem and so tractable. Many researchers are now actively working to explain and exploit this phenomenon. It's responsible for the examples given above.
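The substitution of l¹ minimization for sparsest-solution search can be tried in a few lines. The following sketch (my illustration, not the talk's code; problem sizes are arbitrary assumptions) solves min ||x||₁ subject to Ax = b as a linear program, via the standard split x = u − v with u, v ≥ 0, and checks whether the planted sparse solution is recovered.

```python
# Sketch: basis pursuit -- recover the sparsest solution of an
# underdetermined system b = A x by l1 minimization, posed as an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                  # 40 equations, 100 unknowns, 5 nonzeros

A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0                            # underdetermined: m < n

# min sum(u + v)  s.t.  A(u - v) = b,  u >= 0, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("exact recovery of the sparsest solution:",
      np.allclose(x_hat, x0, atol=1e-4))
```

With the signal this much sparser than the number of equations, the l¹ minimizer coincides with the sparsest solution; brute-force search over supports would be combinatorial, while the LP runs in a fraction of a second.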

In my talk, I'll discuss this curious behavior of l¹ minimization and connect it with some pure mathematics -- convex polytope theory and oriented matroid theory.

Ultimately, the pure math behind this phenomenon concerns some accessible but very surprising properties of random point clouds in very high dimensions: their convex hulls turn out to be remarkably neighborly!

I'll also explain the connection of this phenomenon to the Newton Institute's ongoing program "Statistical Theory and Methods for Complex, High-Dimensional Data".
