Isaac Newton Institute for Mathematical Sciences

Neural Networks and Machine Learning

July - December 1997

Organisers: C M Bishop (Aston), D Haussler (UCSC), G E Hinton (Toronto), M Niranjan (Cambridge), L G Valiant (Harvard)

WORKSHOP ON PULSED NEURAL NETWORKS

Isaac Newton Institute for Mathematical Sciences,

Cambridge, UK

26 and 27 August, 1997

Organisers: Wolfgang Maass and Chris Bishop

FINAL PROGRAMME


Tuesday, August 26

9:00 - 10:15 Tutorial by Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland)
Motivation and Models for Spiking Neurons

10:15 - 10:45 Coffee-Break

10:45 - 12:00 Tutorial by Wolfgang Maass (Technische Universitaet Graz, Austria)
Computation and Coding in Networks of Spiking Neurons

12:00 - 14:00 Lunch

14:00 - 14:40 David Horn (Tel Aviv University, Israel)
Fast Temporal Encoding and Decoding with Spiking Neurons

14:40 - 15:20 John Shawe-Taylor (Royal Holloway, University of London)
Neural Modelling and Implementation via Stochastic Computing

15:20 - 16:00 Tea Break

16:00 - 16:40 Wolfgang Maass (Technische Universitaet Graz, Austria)
A Simple Model for Neural Computation with Pulse Rates and Pulse Correlations

16:40 - 17:20 Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland)
Hebbian Tuning of Delay Lines for Coincidence Detection in the Barn Owl Auditory System

17:20 - 18:00 Poster-Spotlights (5 minutes each)

18:00 - 19:00 Poster-Session (with wine reception)

19:00 Barbecue dinner at the Isaac Newton Institute

Wednesday, August 27

9:00 - 10:15 Tutorial by Alan F. Murray (University of Edinburgh)
Pulse-Based Computation in VLSI Neural Networks: Fundamentals

10:15 - 10:40 Coffee-Break

10:40 - 11:20 Alessandro Mortara (Centre Suisse d'Electronique et de Microtechnique, Neuchatel, Switzerland)
Communication and Computation using Spikes in Silicon Perceptive Systems

11:20 - 12:00 David P.M. Northmore (University of Delaware, USA)
Interpreting Spike Trains with Networks of Dendritic-Tree Neuromorphs

12:00 - 14:00 Lunch (During lunch we will discuss plans for an edited book on pulsed neural nets)

14:00 - 14:40 Alister Hamilton (University of Edinburgh)
Pulse Based Signal Processing for Programmable Analogue VLSI

14:40 - 15:20 Rodney Douglas (ETH Zurich, Switzerland)
A Communications Infrastructure for Neuromorphic Analog VLSI Systems

15:20 - 15:40 Coffee-Break

15:40 - 17:00 Plenary Discussion: Artificial Pulsed Neural Nets: Prospects and Problems


ABSTRACTS (in the order of the talks)

Tutorial by Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland)
Motivation and Models for Spiking Neurons

In this introductory tutorial I will try to explain some basic ideas of pulsed neural nets and to provide a common language for them. To do so I will

0) motivate the idea of pulse coding as opposed to rate coding;

1) discuss the relation between various simplified models of spiking neurons (integrate-and-fire, Hodgkin-Huxley) and argue that the Spike Response Model (= linear response kernels + threshold; see the one-line formulation below) is a suitable framework for thinking about such models;

2) discuss typical phenomena of the dynamics in populations of spiking neurons (oscillations, asynchronous states), provide stability arguments and introduce an integral equation for the population dynamics;

3) review the idea of feature binding and pattern segmentation by a 'synchronicity code'.
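
For reference, the Spike Response Model of point 1) can be summarized in a single equation (a standard textbook formulation, assumed here rather than taken from the tutorial itself):

    u_i(t) = \eta(t - \hat{t}_i) + \sum_j w_{ij} \sum_f \varepsilon(t - t_j^{(f)}),
    \qquad \text{neuron } i \text{ fires when } u_i(t) \ge \vartheta,

where \hat{t}_i is the most recent firing time of neuron i, the kernel \eta describes reset and refractoriness after that spike, \varepsilon is the postsynaptic response to a presynaptic spike at time t_j^{(f)}, and \vartheta is the threshold. Integrate-and-fire models, and to a good approximation Hodgkin-Huxley models, correspond to particular choices of \eta and \varepsilon.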


Tutorial by Wolfgang Maass (Technische Universitaet Graz, Austria)
Computation and Coding in Networks of Spiking Neurons

This tutorial will provide an introduction to


David Horn (Tel Aviv University, Israel)
Fast Temporal Encoding and Decoding with Spiking Neurons

We propose a simple theoretical structure of interacting integrate-and-fire neurons that can handle fast information processing, and that may account for the fact that only a few neuronal spikes suffice to transmit information in the brain. Using integrate-and-fire neurons that are subjected to individual noise and to a common external input, we calculate their first passage time (FPT), or inter-spike interval. We suggest using a population average for evaluating the FPT that represents the desired information. Instantaneous lateral excitation among these neurons aids the analysis. By employing a second layer of neurons with variable connections to the first layer, we represent the strength of the input by the number of output neurons that fire, thus decoding the temporal information. Such a model can easily lead to a logarithmic relation as in Weber's law. The latter follows naturally from information maximization if the input strength is statistically distributed according to an approximate inverse law.
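
As a concrete (and purely illustrative) rendering of the encoding idea, the following sketch simulates a population of noisy leaky integrate-and-fire neurons receiving a common input and reads out the population-averaged first passage time; all parameter values are assumptions, not taken from the talk:

    import numpy as np

    rng = np.random.default_rng(0)

    def population_fpt(I, n=1000, tau=20.0, theta=1.0, sigma=0.2,
                       dt=0.1, t_max=200.0):
        """Population-averaged first passage time (FPT) to threshold."""
        v = np.zeros(n)                  # membrane potentials
        fpt = np.full(n, np.inf)         # each neuron's FPT
        for step in range(int(t_max / dt)):
            noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
            v += dt * (-v / tau + I) + noise   # common input, private noise
            crossed = (v >= theta) & np.isinf(fpt)
            fpt[crossed] = step * dt
        return np.mean(fpt[np.isfinite(fpt)])

    # A stronger common input yields a shorter mean FPT, so the first
    # "wave" of spikes already encodes the stimulus strength.
    for I in (0.06, 0.1, 0.2):
        print(I, population_fpt(I))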


John Shawe-Taylor (Royal Holloway, University of London)
Neural Modelling and Implementation via Stochastic Computing

'Stochastic computing' studies computation performed by manipulating streams of random bits which represent real values via a frequency encoding. The paper will review results obtained in applying this approach to neural computation. The following topics will be covered:
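
The core primitive of stochastic computing is compact enough to sketch here: a real value p in [0,1] is represented by a random bit stream whose bits are 1 with probability p, and multiplication then reduces to a bitwise AND of independent streams. A generic illustration (not code from the talk):

    import numpy as np

    rng = np.random.default_rng(1)

    def encode(p, n_bits=100_000):
        """Frequency encoding: the fraction of 1s in the stream estimates p."""
        return rng.random(n_bits) < p

    def decode(stream):
        return stream.mean()

    a, b = encode(0.3), encode(0.6)
    product = a & b          # AND of independent streams multiplies values
    print(decode(product))   # approximately 0.18 = 0.3 * 0.6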


Wolfgang Maass (Technische Universitaet Graz, Austria)
A Simple Model for Neural Computation with Pulse Rates and Pulse Correlations

A simple extension of standard neural network models is introduced that provides a model for computations with pulses in which both the pulse frequencies and the correlations in pulse times between different pulse trains are computationally relevant. Such an extension appears to be useful since it has been shown that firing correlations play a significant computational role in many biological neural systems, and there exist attempts to transfer this coding mechanism to artificial pulsed neural networks. Standard neural network models, by contrast, are only suitable for describing computations in terms of pulse rates.

The resulting extended neural network models are still relatively simple, so that their computational power can be analyzed theoretically. We prove rigorous separation results, which show that the use of pulse correlations in addition to pulse rates can increase the computational power of a neural network by a significant amount.
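
The distinction can be made concrete with a toy example: two pairs of pulse trains with identical rates but different pairwise correlations are indistinguishable to a rate readout, while a coincidence readout separates them cleanly (illustrative parameters, not the model of the talk):

    import numpy as np

    rng = np.random.default_rng(2)
    n_bins, rate = 10_000, 0.1      # discrete time bins, spike prob. per bin

    # Uncorrelated pair: independent trains with the same rate.
    a1 = rng.random(n_bins) < rate
    a2 = rng.random(n_bins) < rate

    # Correlated pair: same marginal rate, but with a shared spike source.
    shared = rng.random(n_bins) < rate / 2
    b1 = shared | (rng.random(n_bins) < rate / 2)
    b2 = shared | (rng.random(n_bins) < rate / 2)

    for x, y, label in ((a1, a2, "independent"), (b1, b2, "correlated")):
        # Rates match (~0.1), but the coincidence rate differs sharply.
        print(label, x.mean(), y.mean(), (x & y).mean())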


Wulfram Gerstner (Swiss Federal Institute of Technology, Lausanne, Switzerland)
Hebbian Tuning of Delay Lines for Coincidence Detection in the Barn Owl Auditory System

Owls can locate sound sources in complete darkness with remarkable precision. This capability requires auditory information processing with a temporal precision of less than 5 microseconds. How is this possible, given that typical neurons are at least one order of magnitude slower? In this talk, an integrate-and-fire model of a neuron in the auditory system of the barn owl is presented. Given coherent input, the model neuron is capable of generating precisely timed output spikes. In order to make the input coherent, delay lines are tuned during an early period of the owl's development by an unsupervised learning procedure. This results in an adaptive system which develops a sensitivity to the exact timing of pulses arriving from the left and the right ear, a necessary step for the localization of external sound sources and hence prey.
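
A schematic rendering of such a learning step (unsupervised Hebbian selection among candidate delay lines, strengthening those whose delayed left-ear input coincides with the right-ear input; the rule and all numbers are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(3)

    delays = np.arange(11)                  # candidate delay lines (time steps)
    w = np.ones(len(delays)) / len(delays)  # one weight per delay line
    true_itd = 5                            # interaural time difference

    for trial in range(2000):
        t_left = rng.integers(20, 80)           # left-ear arrival time
        jitter = rng.choice([-1, 0, 0, 1])      # small arrival jitter
        t_right = t_left + true_itd + jitter    # right-ear arrival time
        # Hebbian step: strengthen delay lines whose shifted left input
        # arrives coincident with the right input; normalization prunes
        # the remaining lines.
        coincident = (t_left + delays) == t_right
        w[coincident] *= 1.01
        w /= w.sum()

    print(delays[np.argmax(w)])  # 5: the ITD-compensating delay survives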

(Abstracts of Posters: see the end of this listing)


Tutorial by Alan F. Murray (University of Edinburgh)
Pulse-Based Computation in VLSI Neural Networks: Fundamentals

This tutorial will present the techniques that underlie pulse generation, distribution and arithmetic in VLSI devices. The talk will concentrate on work performed in Edinburgh, but will include references to alternative approaches. Ancillary issues surrounding "neural" computation in analogue VLSI will be drawn out, and the tutorial will include a brief introduction to MOSFET circuits and devices.


Alessandro Mortara (Centre Suisse d'Electronique et de Microtechnique, Neuchatel, Switzerland)
Communication and Computation using Spikes in Silicon Perceptive Systems

This presentation deals with the principles, the main properties and some applications of a pulsed communication system adapted to the needs of analog implementations of perceptive and sensory-motor systems. The interface takes advantage of the fact that activity in perception tasks is often sparsely distributed over a large number of elementary processing units (cells), and it gives the more active cells preferential access to the communication channel. The resulting "open loop" communication architecture can advantageously be used to set up connections between distant cells on the same chip or point-to-point connections between cells on different chips. The system also lends itself to simple circuit implementations of typically biological connectivity patterns, such as the projection of the activity of one cell onto a region (its "projective field") of the next neural processing layer, which can be on a different chip in an actual implementation. Examples of possible applications will be drawn from the fields of vision and sensory-motor loops.


David P.M. Northmore (University of Delaware, USA)
Interpreting Spike Trains with Networks of Dendritic-Tree Neuromorphs

The dendrites of neurons probably play very important signal processing roles in the CNS, allowing large numbers of afferent spike trains to be differentially weighted and delayed, with linear and non-linear summation. Our VLSI neuromorphs capture these essential properties and demonstrate the kinds of computations involved in sensory processing. As recent neurobiology shows, dendrites also play a critical role in learning by back-propagating output spikes to recently active synapses, leading to changes in their efficacy. Using a spike distribution system we are exploring Hebbian learning in networks of neuromorphs.
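
In software terms, the processing attributed to the dendritic tree amounts to per-afferent delays and weights followed by a (possibly nonlinear) summation at the soma. A minimal sketch, with all structure and numbers assumed for illustration:

    import numpy as np

    def dendritic_response(spike_trains, delays, weights, n_steps):
        """Delay and weight each afferent spike train, then sum at the soma."""
        soma = np.zeros(n_steps)
        for train, d, w in zip(spike_trains, delays, weights):
            for t in train:
                if t + d < n_steps:
                    soma[t + d] += w
        return np.tanh(soma)  # saturation stands in for dendritic nonlinearity

    trains = [[5, 20, 40], [8, 22], [15, 40]]   # afferent spike times
    print(dendritic_response(trains, delays=[3, 0, 1],
                             weights=[0.5, 1.0, 0.8], n_steps=50).round(2))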


Alister Hamilton (University of Edinburgh)
Pulse Based Signal Processing for Programmable Analogue VLSI

VLSI implementations of Pulsed Neural Systems often require the use of standard signal processing functions and neural networks in order to process sensory data.

This talk will introduce a new pulse based technique for implementing standard signal processing functions - the Palmo technique.

The technique we have developed is fully programmable, and may be used to implement Field Programmable Mixed Signal Arrays - making it of great interest to the wider electronics community.


Rodney Douglas (ETH Zurich, Switzerland)
A Communications Infrastructure for Neuromorphic Analog VLSI Systems

Analogs of peripheral sensory structures such as retinas and cochleas, as well as populations of neurons, have been successfully implemented on single neuromorphic analog Very Large Scale Integration (aVLSI) chips. However, the amount of computation that can be performed on a single chip is limited. The construction of large neuromorphic systems requires a multi-chip communication framework optimized for neuromorphic aVLSI designs. We have developed one such framework. It is an asynchronous multiplexing communication network based on an address-event representation (AER). In AER, analog signals from the neurons are encoded by pulse frequency modulation. These pulses are abstractly represented on a communication bus by the address of the neuron that generated them, and the timing of these address events communicates analog information. The multiplexing used by the communication framework attempts to exploit the greater speed of silicon technology over biological neurons to compensate for the more limited direct physical connectivity of aVLSI. AER provides a large degree of flexibility for routing digital signals to arbitrary physical locations.
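
The AER idea itself is compact: each pulse is reduced to the address of its source neuron placed on a shared asynchronous bus, and analog content is carried by when and how often each address appears. A software model of this encoding (illustrative, not the actual bus protocol):

    def to_aer(spike_times_by_neuron):
        """Merge per-neuron spike times into one time-ordered event stream."""
        events = [(t, addr) for addr, times in spike_times_by_neuron.items()
                  for t in times]
        return sorted(events)          # the shared bus serializes events

    def decode_rates(events, window):
        """Recover each sender's pulse frequency from its event count."""
        counts = {}
        for t, addr in events:
            counts[addr] = counts.get(addr, 0) + 1
        return {addr: n / window for addr, n in counts.items()}

    bus = to_aer({0: [1.0, 2.0, 3.0], 1: [1.5], 2: [0.5, 2.5]})
    print(bus)                         # [(0.5, 2), (1.0, 0), (1.5, 1), ...]
    print(decode_rates(bus, window=3.0))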


POSTERS

Irit Opher and David Horn (Tel Aviv University, Israel)
Arrays of Pulse Coupled Neurons: Spontaneous Activity Patterns and Image Analysis

Arrays of interacting identical pulse-coupled neurons can develop coherent firing patterns, such as moving stripes, rotating spirals and expanding concentric rings. We obtain all of them using a novel two-variable description of integrate-and-fire neurons that allows for a continuum formulation of neural fields. One of these variables distinguishes between the two different states of refractoriness and depolarization, and it acquires topological meaning when it is turned into a field. Hence it leads to a topological characterization of the ensuing solitary waves. These are limited to point-like excitations on a line and to linear excitations, including all the examples quoted above, on a two-dimensional surface. A moving patch of firing activity is not an allowed solitary wave on our neural surface. Only the presence of strong inhomogeneity, which destroys the continuity of the neural field, allows for the appearance of patchy incoherent firing patterns driven by excitatory interactions.

Such a neural manifold can be used for image analysis, performing edge detection and scene segmentation under different connectivities. Using either DoG (difference of Gaussians) or short-range synaptic connections, we obtain edge detection at times when the total activity of the system runs through a minimum. With generalized Hebbian connections the system develops temporal segmentation; its separation power is limited to a small number of segments.
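
A drastically reduced version of such an array already shows ring-shaped activity: a 2-D lattice of integrate-and-fire units with nearest-neighbour excitation and a refractory state, seeded at a single point (an illustrative toy, not the authors' continuum field formulation):

    import numpy as np

    n, theta, refractory_steps = 30, 1.0, 3
    v = np.zeros((n, n))                 # membrane potentials
    refr = np.zeros((n, n), dtype=int)   # remaining refractory steps
    v[15, 15] = theta                    # seed one firing unit

    for step in range(12):
        firing = (v >= theta) & (refr == 0)
        # Each firing unit excites its four nearest neighbours.
        f = firing.astype(float)
        drive = np.zeros_like(v)
        drive[1:, :] += f[:-1, :]; drive[:-1, :] += f[1:, :]
        drive[:, 1:] += f[:, :-1]; drive[:, :-1] += f[:, 1:]
        refr[firing] = refractory_steps  # fired units turn refractory
        v[firing] = 0.0                  # reset after the pulse
        refr = np.maximum(refr - 1, 0)
        v += drive                       # integrate neighbour pulses
        print(step, int(firing.sum()))   # 1, 4, 8, ... as the ring expands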


Berthold Ruf and Michael Schmitt (Technische Universitaet Graz, Austria)
Self-Organizing Maps of Spiking Neurons Using Temporal Coding

The basic idea of self-organizing maps (SOM) introduced by Kohonen, namely to map similar input patterns to contiguous locations in the output space, is of importance not only to artificial but also to biological systems, e.g. in the visual cortex. However, the standard formulation of the SOM and the corresponding learning rule are not suitable for biological systems. Here we show how networks of spiking neurons can be used to implement a variation of the SOM in temporal coding which has the same characteristic behavior. In contrast to the standard formulation of the SOM, our construction has the additional advantage that the winner among the competing neurons can be determined quickly and locally.
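
The competition can be sketched in a few lines: with time-to-first-spike coding, the unit whose weights best match the input reaches threshold earliest, and that first spike selects the winner without any global comparison of activations (an illustrative reduction of the construction, with assumed dynamics):

    import numpy as np

    rng = np.random.default_rng(4)
    n_units, dim, theta = 10, 4, 0.5
    w = rng.random((n_units, dim))       # one weight vector per map unit

    def train_step(x, lr=0.1, radius=1):
        # Each unit integrates its weighted input at a constant rate, so
        # a better match means an earlier (virtual) firing time.
        t_fire = theta / (w @ x + 1e-9)
        winner = int(np.argmin(t_fire))  # earliest spike wins
        lo, hi = max(0, winner - radius), min(n_units, winner + radius + 1)
        w[lo:hi] += lr * (x - w[lo:hi])  # Kohonen-style neighbourhood update
        return winner

    for _ in range(100):
        train_step(rng.random(dim))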


Wolfgang Maass and Michael Schmitt (Technische Universitaet Graz, Austria)
On the Complexity of Learning for Networks of Spiking Neurons

In a network of spiking neurons a new set of parameters becomes relevant which has no counterpart in traditional neural network models: the time that a pulse needs to travel through a connection between two neurons (also known as ``delay'' of a connection). It is known that these delays are tuned in biological neural systems through a variety of mechanisms. We investigate the VC-dimension of networks of spiking neurons where the delays are viewed as ``programmable parameters'' and we prove tight bounds for this VC-dimension. Thus we get quantitative estimates for the diversity of functions that a network with fixed architecture can compute with different settings of its delays. It turns out that a network of spiking neurons with k adjustable delays is able to compute a much richer class of Boolean functions than a threshold circuit with k adjustable weights. The results also yield bounds for the number of training examples that an algorithm needs for tuning the delays of a network of spiking neurons. Results about the computational complexity of such algorithms are also given.


Wolfgang Maass and Thomas Natschlaeger (Technische Universitaet Graz, Austria)
Networks of Spiking Neurons Can Emulate Arbitrary Hopfield Nets in Temporal Coding

A theoretical model for analog computation in networks of spiking neurons with temporal coding is introduced and tested through simulations in GENESIS. It turns out that the use of multiple synapses yields very noise robust mechanisms for analog computations via the timing of single spikes. One arrives in this way at a method for emulating arbitrary Hopfield nets with spiking neurons in temporal coding, yielding new models for associative recall of spatio-temporal firing patterns. We also show that it suffices to store these patterns in the efficacies of excitatory synapses. A corresponding layered architecture yields a refinement of the synfire-chain model that can assume a fairly large set of different stable firing patterns for different inputs.


Wolfgang Maass and Berthold Ruf (Technische Universitaet Graz, Austria)

It was previously shown that the computational power of formal models for computation with pulses is quite high if the pulses arriving at a spiking neuron have an approximately linearly rising or linearly decreasing initial segment. This property is satisfied by common models for biological neurons. On the other hand, several implementations of pulsed neural nets in VLSI employ pulses that have the shape of step functions. We analyse the relevance of the shape of pulses for the computational power of formal models for pulsed neural nets. It turns out that the computational power is significantly higher if one employs pulses with a linearly increasing or decreasing segment.
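
The underlying point can be illustrated directly: if each incoming pulse contributes a linearly rising segment of slope a_i from its arrival time onward, the firing time of the receiving neuron is a weighted average of the input times, i.e. an analog quantity, whereas with step-shaped pulses the potential changes only at the arrival instants themselves. A small sketch of this computation (illustrative, not the formal model of the poster):

    def firing_time(arrivals, slopes, theta):
        """First t with sum_i slopes[i] * max(t - arrivals[i], 0) >= theta."""
        total_slope, offset = 0.0, 0.0
        for t_i, a_i in sorted(zip(arrivals, slopes)):
            # Is the threshold reached before the next pulse arrives?
            if total_slope > 0 and (theta + offset) / total_slope <= t_i:
                break
            total_slope += a_i
            offset += a_i * t_i
        return (theta + offset) / total_slope

    # The output spike time shifts continuously with any input time:
    print(firing_time([1.0, 2.0, 3.0], [0.5, 0.5, 0.5], theta=1.5))  # 3.0
    print(firing_time([1.0, 2.1, 3.0], [0.5, 0.5, 0.5], theta=1.5))  # ~3.03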


Ulrich Roth and Tim Schoenauer (Technische Universitaet Berlin, Germany)

For image processing, or to model brain areas with complex integrate-and-fire neurons, the simulation of networks consisting of several million spiking neurons is desirable. Existing hardware platforms are unable to perform the simulation of such complex networks in reasonable time. Therefore, a neurocomputer for spiking neural networks (NESPINN) has been designed and is about to be realized. The entire system comprises 16 similar boards, which communicate via a VME bus. A network of up to 128K neurons is computed per board in real time. Each board consists of two connection units, two weight units and an ASIC with four parallel processing units in a SIMD/dataflow architecture. In addition, a simulation tool "SimSpinn", written in Java as an interface to existing platforms as well as to the NESPINN system, has been developed. Features of this simulation engine will be presented, together with an outlook on a second-generation neuroaccelerator. This second generation is currently being planned at the Technical University of Berlin, and the outlook is intended to encourage comments and suggestions for the new architecture.
