Slow subspace learning
Seminar Room 1, Newton Institute
Slow feature learning exploits the intuition that, in realistic processes, stimuli observed in close succession are likely to share the same interpretation, while stimuli observed independently are likely to be interpreted differently. The talk discusses such a method for stationary, absolutely regular processes taking values in a high-dimensional space. A projection onto a low-dimensional subspace is selected from a finite number of observations on the basis of a criterion that rewards data variance and penalizes the variance of the velocity vector. Convergence theorems, an error analysis, and some experiments are reported.
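The criterion described above — maximize data variance while minimizing the variance of the first differences — can be sketched as a generalized eigenvalue problem, in the spirit of slow feature analysis. The following is a minimal illustration, not the speaker's method; the function name and the whitening-based solver are assumptions for the sketch.

```python
import numpy as np

def slow_subspace(X, k):
    """Illustrative sketch: project T x D data X onto a k-dimensional
    subspace that keeps data variance high while keeping the velocity
    (first-difference) variance low."""
    Xc = X - X.mean(axis=0)          # center the observations
    V = np.diff(Xc, axis=0)          # velocity vectors x_{t+1} - x_t
    B = Xc.T @ Xc / len(Xc)          # data covariance (reward term)
    A = V.T @ V / len(V)             # velocity covariance (penalty term)
    # Whiten with B, then diagonalize the velocity covariance:
    # the directions with the smallest eigenvalues are the slowest.
    evals, E = np.linalg.eigh(B)
    W = E / np.sqrt(evals)           # whitening map (B assumed full rank)
    _, U = np.linalg.eigh(W.T @ A @ W)
    return W @ U[:, :k]              # map onto the k slowest directions
```

Applied to data in which a slowly varying signal is mixed with fast noise, the first extracted direction should recover the slow component, in the sense that its velocity variance is small relative to its data variance.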