
Workshop Programme

Inference and Estimation in Probabilistic Time-Series Models

18 - 20 June 2008

Timetable

Wednesday 18 June
13:00-14:00 Registration
14:00-15:00 Godsill, S (Cambridge)
  Sequential inference for dynamically evolving groups of objects Sem 1
 

In this talk I will describe recent work on tracking for groups of objects. The aim of the process is to infer evolving groupings of moving objects over time, including group affiliations and individual object states. Behaviour of group objects is modelled using interacting multiple object models, in which individuals attempt stochastically to adjust their behaviour to be `similar' to that of other objects in the same group; this idea is formalised as a multi-dimensional stochastic differential equation for group object motion. The models are estimated algorithmically using sequential Markov chain Monte Carlo approximations to the filtering distributions over time, allowing for more complex modelling scenarios than the more familiar importance-sampling based Monte Carlo filtering schemes. Examples will be presented from GMTI data trials for multiple vehicle motion.
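
The group-motion idea can be illustrated with a minimal, hypothetical sketch (all parameters invented, and much simpler than the models in the talk): each object's velocity mean-reverts towards the group-average velocity, discretised with an Euler-Maruyama step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: objects stochastically adjust their velocity
# to be "similar" to the group average (an Ornstein-Uhlenbeck-style coupling).
n_objects, n_steps, dt = 4, 200, 0.1
alpha, sigma = 0.5, 0.2          # coupling strength and diffusion scale

pos = rng.normal(0.0, 1.0, n_objects)
vel = rng.normal(0.0, 1.0, n_objects)

for _ in range(n_steps):
    group_mean_vel = vel.mean()
    # Euler-Maruyama step: the drift pulls each velocity towards the group mean
    vel += alpha * (group_mean_vel - vel) * dt \
           + sigma * np.sqrt(dt) * rng.normal(size=n_objects)
    pos += vel * dt
```

After many steps the velocities cluster around a common group value, which is the qualitative behaviour the interacting multiple object models formalise.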

 
15:00-15:30 Tea
15:30-16:10 Cai, Y (Plymouth)
  A Bayesian method for non-Gaussian autoregressive quantile function time series models Sem 1
 

Many time series in economics and finance are non-Gaussian. In this paper, we propose a Bayesian approach to non-Gaussian autoregressive quantile function time series models in which the scale parameter does not depend on the values of the time series. Since this approach is parametric, we also compare it with the semi-parametric approach of Koenker (2005). A simulation study and applications to real time series show that the method works very well.

 
16:10-16:50 Luo, X (Oxford)
  State estimation in high dimensional systems: the method of the ensemble unscented Kalman filter Sem 1
 

The ensemble Kalman filter (EnKF) is a Monte Carlo implementation of the Kalman filter, often adopted to reduce the computational cost of dealing with high dimensional systems. In this work, we propose a new EnKF scheme based on the concept of the unscented transform, which we therefore call the ensemble unscented Kalman filter (EnUKF). Under the assumption of Gaussian estimation errors, it can be shown analytically that the EnUKF achieves more accurate estimates of the ensemble mean and covariance than the ordinary EnKF. Therefore, incorporating the unscented transform into an EnKF may benefit its performance. Numerical experiments conducted on a 40-dimensional system support this argument.
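
For context, a generic unscented transform (in the Julier-Uhlmann style, not the EnUKF itself) can be sketched as follows: a Gaussian is represented by 2n+1 symmetric sigma points, propagated through a nonlinearity, and recombined with weights summing to one.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through f using sigma points.

    A generic sketch of the unscented transform, not the EnUKF scheme
    of the talk: 2n+1 symmetric sigma points, weighted recombination.
    """
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)      # columns are the offsets
    sigmas = np.vstack([mean, mean + L.T, mean - L.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigmas])
    y_mean = w @ ys
    y_cov = (ys - y_mean).T @ (w[:, None] * (ys - y_mean))
    return y_mean, y_cov

# For a linear map the transform is exact: mean and covariance are recovered
m = np.array([1.0, 2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
m2, P2 = unscented_transform(m, P, lambda x: x)
```

The exactness for linear maps is one reason sigma-point schemes can track the ensemble mean and covariance more faithfully than plain Monte Carlo sampling at small ensemble sizes.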

 
16:50-17:30 Whiteley, N (Cambridge)
  A modern perspective on auxiliary particle filters Sem 1
 

The auxiliary particle filter (APF) is a popular algorithm for the Monte Carlo approximation of the optimal filtering equations of state space models. This talk presents a summary of several recent developments which affect the practical implementation of this algorithm and simplify its theoretical analysis. In particular, an interpretation of the APF, which makes use of an auxiliary sequence of distributions, allows the approach to be extended to more general Sequential Monte Carlo algorithms. The same interpretation allows existing theoretical results for standard particle filters to be applied directly. Several non-standard implementations and applications will also be discussed.
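
A minimal APF step can be sketched for a toy linear-Gaussian state space model (the model and all parameters are illustrative assumptions, not taken from the talk): a first-stage resampling favours particles whose predicted location explains the next observation, followed by propagation and a second-stage reweighting.

```python
import numpy as np

rng = np.random.default_rng(1)

def apf_step(particles, weights, y, phi=0.9, q=0.5, r=0.5):
    """One auxiliary particle filter step for a toy model (an illustrative
    assumption):  x_t = phi * x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    """
    n = particles.size
    # First stage: weight each particle by the likelihood of y at its
    # predicted location (the transition mean), then resample ancestors.
    mu = phi * particles
    first = weights * np.exp(-0.5 * ((y - mu) / r) ** 2)
    first /= first.sum()
    idx = rng.choice(n, size=n, p=first)
    # Second stage: propagate the chosen ancestors and reweight by the
    # ratio of the true likelihood to the first-stage approximation.
    new = phi * particles[idx] + q * rng.normal(size=n)
    log_w = -0.5 * ((y - new) / r) ** 2 + 0.5 * ((y - mu[idx]) / r) ** 2
    w = np.exp(log_w - log_w.max())
    return new, w / w.sum()

particles = rng.normal(size=500)
weights = np.full(500, 1.0 / 500)
for y in [0.3, -0.1, 0.4]:
    particles, weights = apf_step(particles, weights, y)
```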

 
17:30-18:30 Poster Session
Thursday 19 June
09:00-09:40 Reisen, VA (Universidade Federal do Espirito Santo)
  Estimating multiple fractional seasonal long-memory parameter Sem 1
 

This paper explores seasonal and long-memory time series properties by using the seasonal fractionally ARIMA model when the seasonal data has two seasonal periods, namely, s1 and s2. The stationarity and invertibility parameter conditions are established for the model studied. To estimate the memory parameters, the method given in Reisen, Rodrigues and Palma (2006 a,b), which is a variant of the technique proposed in Geweke and Porter-Hudak (1983) (GPH), is generalized here to deal with a time series with multiple seasonal fractional long-memory parameters. The accuracy of the method is investigated through Monte Carlo experiments and the good performance of the estimator indicates that it can be an alternative procedure to estimate seasonal and cyclical long-memory time series data.

 
09:40-10:20 Shen, Y (Aston)
  Variational Markov Chain Monte Carlo for inference in partially observed stochastic dynamic systems Sem 1
 

In this paper, we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian inference in partially observed non-linear diffusion processes. The algorithms we develop herein use an approximating distribution to the true posterior as the proposal distribution for an independence sampler. The approximating distribution utilises the posterior approximation computed using the recently developed variational Gaussian process approximation method. Flexible blocking strategies are then introduced to further improve the mixing, and thus the efficiency, of the algorithms. The algorithms are tested on two cases of a double-well potential system. It is shown that the blocked versions of the variational sampling algorithms outperform Hybrid Monte Carlo sampling in terms of computational efficiency, except for cases where multi-modal structure is present in the posterior distribution.

 
10:20-11:00 Turner, R (University College London)
  Two problems with variational expectation maximisation for time-series models Sem 1
 

Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, and yet requiring fewer computational resources than Markov chain Monte Carlo methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well known compactness property is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.

 
11:00-11:30 Coffee
11:30-12:30 Opper, M (Technische Universität Berlin)
  Approximate Inference for Continuous Time Markov Processes Sem 1
 

Continuous time Markov processes (such as jump processes and diffusions) play an important role in the modelling of dynamical systems in many scientific areas.

In a variety of applications, the stochastic state of the system as a function of time is not directly observed. One only has access to a set of noisy observations taken at a discrete set of times. The problem is then to infer the unknown state path as well as possible. In addition, model parameters (like diffusion constants or transition rates) may also be unknown and have to be estimated from the data. While it is fairly straightforward to present a theoretical solution to these estimation problems, a practical solution in terms of PDEs or by Monte Carlo sampling can be time consuming, and one is looking for efficient approximations. I will discuss approximate solutions to this problem such as variational approximations to the probability measure over paths and weak noise expansions.

 
12:30-13:30 Lunch at Wolfson Court/Churchill College
14:00-15:00 Singh, S (Cambridge)
  Recent applications of spatial point processes to multiple-object tracking Sem 1
 

The point process framework is natural for the multiple-object tracking problem and is increasingly playing a central role in the derivation of new inference schemes. Interest in this framework is largely due to Ronald Mahler's derivation of a filter that propagates the first moment of a Markov-in-time spatial point process observed in noise. Since then there have been several extensions to this result with accompanying numerical implementations based on Sequential Monte Carlo. These results will be presented.

 
15:00-15:20 Tea
15:20-16:00 Kondor, R (University College London)
  Multi-object tracking with representations of the symmetric group Sem 1
 

We present a framework for maintaining and updating a time-varying distribution over permutations matching tracks to real-world objects. Our approach hinges on two insights from the theory of harmonic analysis on noncommutative groups. The first is that it is sufficient to maintain certain “low frequency” Fourier components of this distribution. The second is that marginals and observation updates can be efficiently computed from such components by extensions of Clausen’s FFT for the symmetric group.

 
16:00-17:00 Williams, C (Edinburgh)
  Factorial switching linear dynamical systems for physiological condition monitoring Sem 1
 

Condition monitoring often involves the analysis of measurements taken from a system which "switches" between different modes of operation in some way. Given a sequence of observations, the task is to infer which possible condition (or "switch setting") of the system is most likely at each time frame. In this paper we describe the use of factorial switching linear dynamical models for such problems. A particular advantage of this construction is that it provides a framework in which domain knowledge about the system being analysed can easily be incorporated.

We demonstrate the flexibility of this type of model by applying it to the problem of monitoring the condition of a premature baby receiving intensive care. The state of health of a baby cannot be observed directly, but different underlying factors are associated with particular patterns of measurements, e.g. in the heart rate, blood pressure and temperature. We use the model to infer the presence of two different types of factors: common, recognisable regimes (e.g. certain artifacts or common physiological phenomena), and novel patterns which are clinically significant but have unknown cause. Experimental results are given which show the developed methods to be effective on real intensive care unit monitoring data.

Joint work with John Quinn and Neil McIntosh

 
17:00-17:30 Roberts, S (Oxford)
  Bayesian Gaussian process models for multi-sensor time-series prediction Sem 1
 

We propose a powerful prediction algorithm built upon Gaussian processes (GPs). They are particularly useful for their flexibility, facilitating accurate prediction even in the absence of strong physical models. GPs further allow us to work within a completely Bayesian framework. As such, we show how the hyperparameters of our system can be marginalised by use of Bayesian Monte Carlo, a principled method of approximate integration. We employ the error bars of the GP's prediction as a means to select only the most informative observations to store. This allows us to introduce an iterative formulation of the GP to give a dynamic, on-line algorithm. We also show how our error bars can be used to perform active data selection, allowing the GP to select where and when it should next take a measurement. We demonstrate how our methods can be applied to multi-sensor prediction problems where data may be missing, delayed and/or correlated. In particular, we present a real network of weather sensors as a testbed for our algorithm.
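
The active-selection idea can be sketched with standard GP predictive equations (fixed hyperparameters here for simplicity, rather than the Bayesian Monte Carlo marginalisation of the talk; kernel and parameter values are illustrative assumptions): compute the predictive variance over candidate inputs and measure where the model is least certain.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x, y, xs, noise=0.1):
    """Standard GP predictive mean and variance at test inputs xs."""
    K = rbf(x, x) + noise**2 * np.eye(x.size)
    Ks = rbf(x, xs)
    Kss = rbf(xs, xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

# Active data selection: among candidate inputs, the next measurement is
# taken where the predictive error bars are largest.
x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
candidates = np.linspace(0.0, 5.0, 51)
_, var = gp_posterior(x, y, candidates)
next_x = candidates[np.argmax(var)]
```

As expected, the largest error bars (and hence the selected input) lie far from the existing observations.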

 
17:30-18:30 Wine reception/poster session
Friday 20 June
09:00-09:40 McLachlan, GJ (Queensland)
  Clustering of time course gene-expression data via mixture regression models Sem 1
 

In this paper, we consider the use of mixtures of linear mixed models to cluster data which may be correlated and replicated and which may have covariates. This approach can thus be used to cluster time series data. For each cluster, a regression model is adopted to incorporate the covariates, and the correlation and replication structure in the data are specified by the inclusion of random effects terms. The procedure is illustrated in its application to the clustering of time-course gene expression data.

 
09:40-10:20 Titsias, MK (Manchester)
  Markov chain Monte Carlo algorithms for Gaussian processes Sem 1
 

We discuss Markov chain Monte Carlo algorithms for sampling functions in Gaussian process models. A first algorithm is a local sampler that iteratively samples each local part of the function by conditioning on the remaining part of the function. The partitioning of the domain of the function into regions is automatically carried out during the burn-in sampling phase. A more advanced algorithm uses control variables which are auxiliary function values that summarize the properties of the function. At each iteration, the algorithm proposes new values for the control variables and then generates the function from the conditional Gaussian process prior. The control input locations are found by minimizing the total variance of the conditional prior. We apply these algorithms to estimate non-linear differential equations in Systems Biology.
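
The conditional-prior proposal can be sketched as follows (a simplified illustration with a fixed kernel and invented values, not the talk's full sampler): given updated control values, the rest of the function is drawn from the GP prior conditioned on those controls.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, ell=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def sample_given_controls(x, xc, fc, jitter=1e-6):
    """Draw f(x) from the GP prior conditioned on control values fc at xc.

    Sketch of the conditional-prior proposal: after the control variables
    are updated, the function is generated from p(f | fc).
    """
    Kcc = rbf(xc, xc) + jitter * np.eye(xc.size)
    Kxc = rbf(x, xc)
    Kxx = rbf(x, x)
    A = np.linalg.solve(Kcc, Kxc.T).T              # Kxc Kcc^{-1}
    mean = A @ fc
    cov = Kxx - A @ Kxc.T + jitter * np.eye(x.size)
    L = np.linalg.cholesky(cov)
    return mean + L @ rng.normal(size=x.size)

xc = np.array([0.0, 2.0, 4.0])                     # control input locations
fc = np.array([0.5, -0.3, 0.8])                    # current control values
f = sample_given_controls(xc, xc, fc)              # at the controls, f ≈ fc
```

Because the conditional prior pins the function at the control inputs, proposals move the whole function coherently through a small number of summarising variables.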

 
10:20-11:00 Aston, J (Warwick)
  Is that really the pattern we're looking for? Bridging the gap between statistical uncertainty and dynamic programming algorithms Sem 1
 

Two approaches to statistical pattern detection, when using hidden or latent variable models, are to use either dynamic programming algorithms or Monte Carlo simulations. The first produces the most likely underlying sequence from which patterns can be detected but gives no quantification of the error, while the second allows quantification of the error but is only approximate due to sampling error. This paper describes a method to determine the statistical distributions of patterns in the underlying sequence without sampling error in an efficient manner. This approach allows the incorporation of restrictions about the kinds of patterns that are of interest directly into the inference framework, and thus facilitates a true consideration of the uncertainty in pattern detection.
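
The dynamic-programming side of this trade-off can be illustrated with the classical forward-backward recursion, which yields exact (sampling-error-free) posterior state marginals for a discrete HMM; the talk's method extends such recursions to distributions over patterns. The model values below are invented for illustration.

```python
import numpy as np

def forward_backward(pi0, A, liks):
    """Exact posterior state marginals for a discrete HMM.

    pi0: initial state distribution (K,), A: transition matrix (K, K),
    liks: per-time observation likelihoods (T, K).
    """
    T, K = liks.shape
    alpha = np.empty((T, K))
    alpha[0] = pi0 * liks[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward pass
        alpha[t] = (alpha[t - 1] @ A) * liks[t]
        alpha[t] /= alpha[t].sum()
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):             # backward pass
        beta[t] = A @ (liks[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

pi0 = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
liks = np.array([[0.8, 0.2], [0.7, 0.3], [0.2, 0.8]])
post = forward_backward(pi0, A, liks)
```

Unlike a Viterbi decode, these marginals quantify the uncertainty at each time frame exactly, with no Monte Carlo error.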

 
11:00-11:30 Coffee
11:30-12:30 Moulines, E (CNRS)
  Adaptive Monte Carlo Markov Chains Sem 1
 

In this talk, we present in a common unifying framework several adaptive Markov chain Monte Carlo (MCMC) algorithms that have recently been proposed in the literature. We prove that under a set of verifiable conditions, ergodic averages calculated from the output of a so-called adaptive MCMC sampler converge to the required value and can even, under more stringent assumptions, satisfy a central limit theorem. We prove that the conditions required are satisfied for the Independent Metropolis-Hastings algorithm and the Random Walk Metropolis algorithm with symmetric increments. Finally we propose an application of these results to the case where the proposal distribution of the Metropolis-Hastings update is a mixture of distributions from a curved exponential family. Several illustrations will be provided.
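
One common adaptive scheme (a standard diminishing-adaptation sketch, not any of the specific algorithms of the talk) tunes the random-walk proposal scale towards a target acceptance rate with step sizes that shrink as O(1/i), so the adaptation vanishes and ergodicity can be preserved.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaptive_rwm(logpi, x0, n_iter=5000, target=0.44):
    """Random-walk Metropolis with an adapting proposal scale.

    The log proposal scale is nudged up after an acceptance and down
    after a rejection, with diminishing step sizes 1/i.
    """
    x, log_s = float(x0), 0.0
    samples = np.empty(n_iter)
    for i in range(1, n_iter + 1):
        prop = x + np.exp(log_s) * rng.normal()
        accepted = np.log(rng.uniform()) < logpi(prop) - logpi(x)
        if accepted:
            x = prop
        log_s += (float(accepted) - target) / i   # diminishing adaptation
        samples[i - 1] = x
    return samples

# Sample a standard normal target
samples = adaptive_rwm(lambda z: -0.5 * z * z, 0.0)
```

With the adaptation decaying, the ergodic averages of the chain still converge to expectations under the target, which is the kind of guarantee the talk's verifiable conditions formalise.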

 
12:30-13:30 Lunch at Wolfson Court/Churchill College
14:00-15:00 Papaspiliopoulos, O (Universitat Pompeu Fabra)
  A methodological framework for Monte Carlo estimation of continuous-time processes Sem 1
 

In this talk I will review a methodological framework for the estimation of partially observed continuous-time processes using Monte Carlo methods. I will present different types of data structures and frequency regimes and will focus on unbiased (with respect to discretization errors) Monte Carlo methods for parameter estimation and particle filtering of continuous-time processes. An important component of the methodology is the Poisson estimator, and I will discuss some of its properties. I will also present some results on parameter estimation using variations of the smooth particle filter which exploit the graphical model structure inherent in partially observed continuous-time Markov processes.

 
15:00-15:30 Tea
15:30-16:10 Sykulski, A; Olhede, SC (Imperial/UCL)
  High frequency variability and microstructure bias Sem 1
 

Microstructure noise can substantially bias the estimation of the volatility of an Ito process. Such noise is inherently multiscale, causing eventual inconsistency in estimation as the sampling rate becomes more frequent. Methods have been proposed to remove this bias using subsampling mechanisms. We instead take a frequency domain approach and advocate learning the degree of contamination from the data. The volatility can be seen as an aggregation of contributions from many different frequencies. Having learned the degree of contamination allows us to correct these contributions frequency by frequency and calculate a bias-corrected estimator. This procedure is fast, robust to different signal-to-microstructure scenarios, and is also extended to the problem of correlated microstructure noise. Theory can be developed as long as the Ito process has harmonizable increments and a suitable dynamic spectral range.

 
16:10-17:10 Ghahramani, Z (Cambridge)
  Nonparametric Bayesian time series models: infinite HMMs and beyond Sem 1
 

Hidden Markov models (HMMs) are one of the most widely used statistical models for time series. Traditionally, HMMs have a known structure with a fixed number of states and are trained using maximum likelihood techniques. The infinite HMM (iHMM) allows a potentially unbounded number of hidden states, letting the model use as many states as it needs for the data (Beal, Ghahramani and Rasmussen 2002). Teh, Jordan, Beal and Blei (2006) showed that a form of the iHMM could be derived from the Hierarchical Dirichlet Process, and described a Gibbs sampling algorithm based on this for the iHMM. I will talk about recent work we have done on infinite HMMs. In particular: we now have a much more efficient inference algorithm based on dynamic programming, called 'Beam Sampling', which should make it possible to apply iHMMs to larger problems. We have also developed a factorial version of the iHMM which makes it possible to have an unbounded number of binary state variables, and can be thought of as a time-series generalization of the Indian buffet process.

Joint work with Jurgen van Gael (Cambridge), Yunus Saatci (Cambridge) and Yee Whye Teh (Gatsby Unit, UCL).

 
