July - December 1997

**Organisers**: C M Bishop (*Aston*), D Haussler (*UCSC*), G E Hinton (*Toronto*), M Niranjan (*Cambridge*), L G Valiant (*Harvard*)

Research in machine learning has advanced significantly in recent years, stimulated in part by the emergence of a range of successful, large-scale applications. Examples include optical character recognition, classification of sleep stages from EEG signals, cervical smear screening, and real-time tokamak plasma control.

At the same time there have been many impressive developments in the theoretical foundations of this field, arising from several complementary approaches. Concepts from statistical pattern recognition have been used to formulate a general framework for machine learning based on statistical inference. Parallel developments in computational learning theory have led to a characterisation of computational and sample-size requirements for learning problems, while also resulting in powerful new algorithms. In addition, concepts from information theory, differential geometry and statistical mechanics have been exploited to give alternative insights into neural networks.

The principal aims of this programme are to promote greater inter-disciplinary collaboration between researchers with different theoretical perspectives, to strive for a more unified mathematical framework for neural networks and machine learning, and to stimulate the development of new algorithms for practical applications.
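As one standard illustration (not drawn from the programme description itself) of the sample-size characterisations mentioned above, the classical PAC-learning bound for a finite hypothesis class $H$ states that a consistent learner needs only

```latex
% Classical PAC sample-complexity bound (finite hypothesis class H):
% with probability at least 1 - \delta, a hypothesis consistent with
% m examples has true error at most \epsilon, provided
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| \;+\; \ln\frac{1}{\delta}\right)
```

training examples, so the sample size grows only logarithmically in the size of the hypothesis class and in $1/\delta$.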