
Slow subspace learning

Wednesday 21st May 2008 - 11:00 to 12:00
INI Seminar Room 1

Slow feature learning exploits the intuition that, in realistic processes, successively observed stimuli are likely to share the same interpretation, while independently observed stimuli are likely to be interpreted differently. The talk discusses such a method for stationary, absolutely regular processes taking values in a high-dimensional space. A projection onto a low-dimensional subspace is selected from a finite number of observations on the basis of a criterion that rewards the data variance and penalizes the variance of the velocity vector. Convergence theorems, an error analysis, and some experiments are reported.
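The criterion described above can be sketched numerically: estimate the covariance of the observations (data variance) and the covariance of their successive differences (velocity variance), then pick the subspace spanned by the top eigenvectors of their weighted difference. This is a minimal illustration, not the method analyzed in the talk; the function name `slow_subspace` and the trade-off weight `lam` are assumptions introduced here, since the abstract does not specify how the two terms are balanced.

```python
import numpy as np

def slow_subspace(X, k, lam=1.0):
    """Estimate a k-dimensional slow subspace from observations X (T x d).

    A sketch of the criterion in the abstract: reward the data variance
    captured by the projection and penalize the variance of the velocity
    (successive differences). `lam` is a hypothetical trade-off weight.
    """
    Xc = X - X.mean(axis=0)            # center the observations
    C = Xc.T @ Xc / len(Xc)            # data covariance (variance reward)
    D = np.diff(Xc, axis=0)            # velocity vectors
    V = D.T @ D / len(D)               # velocity covariance (penalty)
    # Eigen-decompose the symmetric criterion matrix C - lam * V.
    w, U = np.linalg.eigh(C - lam * V)
    # Columns of W span the selected subspace (top-k eigenvalues).
    W = U[:, np.argsort(w)[::-1][:k]]
    return W
```

On data containing one slowly varying coordinate and one white-noise coordinate, the top eigenvector of `C - lam * V` concentrates on the slow coordinate, since white noise has large velocity variance while the slow signal does not.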
