Information-based methods in dynamic learning

Friday 22nd July 2011 - 11:30 to 12:30
INI Seminar Room 1
The history of information/entropy in learning due to Blackwell, Rényi, Lindley and others is sketched. Using results of DeGroot, with new proofs, we arrive at a general class of information functions which give "expected" learning in the Bayes sense. It is shown how this is intimately connected with the theory of majorization: learning means a more peaked distribution in a majorization sense. Counter-examples show that in some real situations it is possible to un-learn, in the sense of having a less peaked posterior than prior. This does not happen in the standard Gaussian case, but it does in cases such as the Beta-mixed binomial. Applications are made to experimental design. For designs for non-linear and dynamic systems, an idea of "local learning" is defined, in which the above theory is applied locally. Some connection with ideas of "active learning" in the area of machine learning is attempted.
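A minimal numerical sketch of the "un-learning" claim for the Beta-binomial case (my own illustration, not material from the talk): with a sharply skewed Beta prior on a binomial success probability, a single surprising observation can leave the posterior more spread out than the prior. Variance is used here as a crude stand-in for peakedness; the talk's notion of peakedness is majorization-based, which variance only approximates.

```python
def beta_var(a, b):
    """Variance of a Beta(a, b) distribution: ab / ((a+b)^2 (a+b+1))."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Sharply skewed prior: we are nearly certain the success probability is high.
a, b = 5.0, 0.1
prior_var = beta_var(a, b)

# Observe a single failure (0 successes in 1 trial); standard conjugate update
# Beta(a, b) -> Beta(a + successes, b + failures).
successes, trials = 0, 1
post_var = beta_var(a + successes, b + (trials - successes))

print(f"prior variance     = {prior_var:.5f}")   # 0.00315
print(f"posterior variance = {post_var:.5f}")    # 0.02082
# The posterior is *less* peaked than the prior -- an "un-learning" step,
# which cannot occur in the standard Gaussian conjugate case.
```

The prior Beta(5, 0.1) and the surprising observation are hypothetical values chosen only to make the effect visible; many other skewed priors behave the same way.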