
Timetable (ICPW01)

Inference for Change-Point and Related Processes

Monday 13th January 2014 to Friday 17th January 2014

Monday 13th January 2014
10:00 to 11:30 Registration
11:30 to 12:30 Locally Stationary Processes - An Overview
The talk gives an overview of locally stationary processes. After summarizing the basic models and techniques, we discuss in more detail the likelihood theory for locally stationary processes and empirical process theory for the theoretical treatment of statistical inference. We also present a Taylor-type expansion for locally stationary processes in terms of so-called derivative processes and show how these techniques can be used to investigate the statistical properties of estimators.
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 15:00 Change-point detection and analysis
Change-point detection had its origins almost sixty years ago in the work of Page, Shiryayev, and Lorden, who focused on sequential detection of a change-point in an observed process. The observations were typically regarded as measurements of the quality of a continuous production process, and the change-point indicated a deterioration in quality that must be detected and corrected. Recent applications, especially in genetics and in fMRI analysis, have led to a variety of new fixed sample change-point problems and other problems of local signal detection having the same essential structure. In this talk I will review this history and common features of likelihood based approaches.
15:00 to 15:30 Afternoon Tea
15:30 to 16:30 Detection and Exploitation of Nonstationarities in Time Series Data
In recent years, with disciplines becoming increasingly quantitative, time series exhibiting nonstationary characteristics have proliferated. Such proliferation is partly due to our improved ability to collect longer series enabling the detection of nonstationarities and nonlinearities that could not be found before. Time series are more complex too, not only are we faced by classical univariate and multivariate series but time series on graphs, manifolds and fractal-like domains. This talk provides an overview on the detection of nonstationarities and how to exploit them to provide a richer understanding of their underlying characteristics. We exhibit these techniques on a variety of data sets arising in earth sciences, economics and epidemics.
16:30 to 18:00 Welcome Wine Reception and Poster Session
Tuesday 14th January 2014
09:25 to 09:30 Welcome from John Toland (INI Director) INI 1
09:30 to 10:00 Detecting smooth changes in locally stationary processes
In a wide range of time series applications, the stochastic properties of the data change over time. It is often realistic to assume that the properties are approximately the same over short time periods and then gradually start to vary. This behaviour is well modelled by locally stationary processes. In this talk, we investigate the question of how to estimate time spans where the stochastic features of a locally stationary time series are the same. We set up a general method which allows us to deal with a wide variety of features, including the mean, covariances, higher moments and the distribution of the time series under consideration.
10:00 to 10:30 Modeling the Evolution of Neurophysiological Signals
In recent years, research into analyzing brain signals has dramatically increased, and the rich data sets being collected require more advanced statistical tools in order to perform proper statistical analyses. Consider an experiment where a stimulus is presented many times, and after each stimulus presentation, time series data is collected. The time series data exhibit nonstationary characteristics. Moreover, across stimulus presentations the time series are non-identical, and their spectral properties may even change over the course of the experiment. In this talk, we will look at a novel approach for analyzing non-identical nonstationary time series data. We consider two sources of nonstationarity: 1) within each replicate and 2) across the replicates, so that the spectral properties of the time series data are evolving over time within a replicate and are also evolving over the course of the experiment. We extend the locally stationary time series model to account for replicated data, with potentially correlated replicates. We analyze a local field potential data set to study how the spectral properties of the local field potentials obtained from the nucleus accumbens and the hippocampus of a monkey evolve over the course of a learning association experiment.
10:30 to 11:00 Robust estimation of time-varying correlations
Co-author: Hernando Ombao (University of California Irvine)

In this paper we propose a new non-parametric method to estimate the correlation matrix of a multivariate non-stationary time series.

Standard kernel-type smoothers of the cross-products use the same bandwidth for all the variances and covariances. Though this approach guarantees positive semi-definiteness of the estimated correlation matrix, the use of one common bandwidth for all the entries can be restrictive, as variances and covariances are in general characterized by different degrees of smoothness.

On the other hand, a kernel-type estimator based on different smoothing parameters for variances and covariances is not necessarily bounded between minus one and one. As a consequence, the resulting correlation matrix is not necessarily positive semi-definite.

The estimator we propose in this paper is based on local polynomial smoothing of an unbiased estimate of the local correlation, which does not require distinguishing between numerator and denominator. Our new approach allows for different smoothing bandwidths for the different entries of the time-varying correlation matrix, while preserving positive semi-definiteness.
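For contrast with the proposal above, the common-bandwidth baseline the abstract discusses can be sketched as follows. This is a minimal illustration, not the authors' estimator, assuming zero-mean series and a Gaussian kernel with a single bandwidth h; because one common set of nonnegative weights is used, the Cauchy-Schwarz inequality keeps the estimated correlation in [-1, 1].

```python
import numpy as np

def tv_correlation(x, y, h=20.0):
    """Time-varying correlation via kernel smoothing of cross-products,
    with one common bandwidth h for variances and covariance
    (zero-mean series assumed)."""
    n = len(x)
    t = np.arange(n)
    # Gaussian kernel weights: one row of weights per target time point
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    sxx = w @ (x * x)          # smoothed local variances
    syy = w @ (y * y)
    sxy = w @ (x * y)          # smoothed local covariance
    return sxy / np.sqrt(sxx * syy)

rng = np.random.default_rng(1)
n = 400
z = rng.normal(size=n)
x = z + 0.3 * rng.normal(size=n)
# correlation switches sign halfway through the sample
y = np.concatenate([z[:200], -z[200:]]) + 0.3 * rng.normal(size=n)
rho = tv_correlation(x, y)
```

Using separate bandwidths for sxx, syy and sxy would break this boundedness guarantee, which is exactly the tension the authors' local-correlation estimator is designed to resolve.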

11:00 to 11:30 Morning Coffee
11:30 to 12:15 Multiscale Change Point Inference
Statistical MUltiscale Change point Estimation (SMUCE) is an inference tool for estimation of, and confidence statements about, a change-point function and its main characteristics: the location, size and number of jumps. SMUCE detects these features on all scales in an optimal fashion. Fast computation of SMUCE via dynamic programming is addressed, and data from ion channel recordings, photoemission spectroscopy and CGH array analysis will be analyzed.
12:15 to 13:30 Lunch at Wolfson Court
13:30 to 14:10 Wild Binary Segmentation for multiple change-point detection
We propose a new technique, called Wild Binary Segmentation (WBS), for consistent estimation of the number and locations of multiple change-points in data. We assume that the number of change-points can increase to infinity with the sample size. Due to a certain random localisation mechanism, WBS works even for very short spacings between the change-points, unlike standard Binary Segmentation. On the other hand, despite its use of localisation, WBS does not require the choice of a window or span parameter, and does not lead to a significant increase in computational complexity. WBS is also easy to code. We describe two types of stopping criteria for WBS: one based on thresholding and another based on what we call the "extended Schwarz Information Criterion". We provide default recommended values of the parameters of the procedure and, in an extensive simulation study, show that it offers very good practical performance in comparison with the state of the art. We provide a new proof of consistency of Binary Segmentation with improved rates of convergence, as well as a corresponding result for WBS.
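For orientation, standard Binary Segmentation with a CUSUM statistic (the baseline that WBS improves upon by drawing random subintervals) can be sketched as follows. This is an illustrative toy implementation, not the authors' code, assuming unit-variance Gaussian noise and an illustrative threshold.

```python
import numpy as np

def cusum_stat(x):
    """|CUSUM| statistics for a single mean change at each split b = 1..n-1."""
    n = len(x)
    b = np.arange(1, n)
    left = np.cumsum(x)[:-1]          # partial sums S_b
    total = x.sum()
    c = (np.sqrt((n - b) / (n * b)) * left
         - np.sqrt(b / (n * (n - b))) * (total - left))
    return np.abs(c)

def binary_segmentation(x, threshold, lo=0, hi=None, out=None):
    """Recursively split [lo, hi) at the largest CUSUM value above threshold."""
    if out is None:
        out = []
    if hi is None:
        hi = len(x)
    if hi - lo < 2:
        return out
    c = cusum_stat(x[lo:hi])
    b = int(np.argmax(c))
    if c[b] > threshold:
        cp = lo + b + 1               # first index of the right segment
        out.append(cp)
        binary_segmentation(x, threshold, lo, cp, out)
        binary_segmentation(x, threshold, cp, hi, out)
    return sorted(out)

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 100),
                    rng.normal(3, 1, 100),
                    rng.normal(0, 1, 100)])
cps = binary_segmentation(x, threshold=4.0)
```

WBS replaces the single global CUSUM scan in each recursion with the maximum over many randomly drawn subintervals, which is what rescues detection when change-points are closely spaced.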
14:10 to 14:50 High Dimensional Stochastic Regression with Latent Factors, Endogeneity and Nonlinearity
We consider a multivariate time series model which represents a high dimensional vector process as the sum of three terms: a linear regression on some observed regressors, a linear combination of some latent and serially correlated factors, and a vector white noise. We investigate the inference without imposing stationarity conditions on the target multivariate time series, the regressors or the underlying factors. Furthermore, we deal with the endogeneity arising when there exist correlations between the observed regressors and the unobserved factors. We also consider the model with a nonlinear regression term which can be approximated by a linear regression function with a large number of regressors. The convergence rates for the estimators of the regression coefficients, the number of factors, the factor loading space and the factors are established in settings where the dimension of the time series and the number of regressors may both tend to infinity together with the sample size. The proposed method is illustrated with both simulated and real data examples.
14:50 to 15:30 H Dette (Ruhr-Universität Bochum)
Detection of multiple structural breaks in multivariate time series
We propose a new nonparametric procedure for the detection and estimation of multiple structural breaks in the autocovariance function of a multivariate (second-order) piecewise stationary process, which also identifies the components of the series where the breaks occur. The new method is based on a comparison of the estimated spectral distribution on different segments of the observed time series and consists of three steps: it starts with a consistent test, which allows us to establish the existence of structural breaks at a controlled type I error. Secondly, it estimates sets containing possible break points, and finally these sets are reduced to identify the relevant structural breaks and the corresponding components which are responsible for the changes in the autocovariance structure. In contrast to other methods proposed in the literature, our approach does not make any parametric assumptions, is not especially designed for detecting a single change point, and addresses the problem of multiple structural breaks in the autocovariance function directly, without use of the binary segmentation algorithm. We prove that the new procedure detects all components and the corresponding locations where structural breaks occur with probability converging to one as the sample size increases, and provide data-driven rules for the selection of all regularization parameters. The results are illustrated by analyzing financial returns, and in a simulation study it is demonstrated that the new procedure outperforms the currently available nonparametric methods for detecting breaks in the dependency structure of multivariate time series.
15:30 to 16:00 Afternoon Tea
16:00 to 16:30 Z Zhou (University of Toronto)
Heteroscedasticity and Autocorrelation Robust Structural Change Detection
The assumption of (weak) stationarity is crucial for the validity of most conventional tests of structural change in time series. Under complicated non-stationary temporal dynamics, we argue that traditional testing procedures result in mixed structural change signals of the first and second order and hence could lead to biased testing results. We propose a simple and unified bootstrap testing procedure which provides consistent testing results under general forms of smooth and abrupt changes in the temporal dynamics of the time series. Monte Carlo experiments are performed to compare our testing procedure with various traditional tests. Our robust bootstrap test is applied to testing for changes in an environmental time series, and our procedure is shown to provide more reliable results than the conventional tests.
16:30 to 17:00 Optimal Detection of a Hidden Target
Imagine that you are observing a sample path of a continuous process started at zero, and that you wish to detect when this sample path reaches a level that is not directly observable. Situations of this type occur naturally in many applied problems, and there is a whole range of hypotheses that can be introduced to study various particular aspects of the problem. We will present a review of the recent results with a view to inference for change-points.
Wednesday 15th January 2014
09:30 to 10:00 M Nunes (Lancaster University)
Analysis of time series observed on networks
In this talk we consider analysis problems for time series that are observed at nodes of a large network structure. Such problems commonly appear in a vast array of fields, such as environmental time series observed at different spatial locations or measurements from computer system monitoring. The time series observed on the network might exhibit different characteristics such as nonstationary behaviour or strong correlation, and the nodal series evolve according to the inherent spatial structure.

The new methodology we develop hinges on reducing dimensionality of the original data through a change of basis. The basis we propose is a second generation wavelet basis which operates on spatial structures. As such, the (large) observed data is replaced by data over a reduced network topology. We give examples of the potential of this dimension reduction method for time series analysis tasks. This is joint work with Marina Knight (University of York) and Guy Nason (University of Bristol).

10:00 to 10:30 C-D Fuh (National Central University, Taiwan)
Decentralized Quickest Change Detection in Hidden Markov Models for Sensor Networks
The decentralized quickest change detection problem is studied in sensor networks, where a set of sensors take observations from a hidden Markov model (HMM) and send sensor messages to a fusion center, which makes a final decision when observations are stopped. It is assumed that the parameter $\theta$ in the HMM changes from $\theta_0$ to $\theta_1$ at some unknown time. The problem is to determine the policies at the sensor and fusion center levels to jointly optimize the detection delay subject to the average run length (ARL) to false alarm constraint. The primary goal of this paper is to investigate how to choose the best binary stationary quantizers, from both the theoretical and computational viewpoints, when a CUSUM-type scheme is used at the fusion center. Directions for further research are also given.
10:30 to 11:00 Nonparametric change-point detection with sparse alternatives
We consider the problem of detecting the change in mean in a sequence of Gaussian vectors. We assume that the change happens only in some of the components of the vector. We construct a nonparametric testing procedure that is adaptive to the number of changing components. Under high-dimensional assumptions we obtain the detection boundary and show the rate optimality of the test.
11:00 to 11:30 Morning Coffee
11:30 to 12:15 Modelling multivariate nonstationarity
Co-authors: Adam Sykulski (UCL), Jonathan Lilly (NWRA), Jeffrey Early (NWRA)

Nonstationarity, like all non-properties, is hard to pin down precisely, and to model flexibly enough for realism while at the same time constraining the model enough to allow for good inference. Modelling is inevitably done in the time or frequency domain, where the two branches of thinking are traditionally linked via the local spectrum, or another bilinear representation of the data.

The resolution in the representation is constrained by the choice of representation. There are of course many alternatives to modelling the local Fourier transform, but these have been mainly parametric or have been developed for a specific application.

A general problem is choosing a representation that suits the analysis of more than one series. We shall focus on how our notion of nonstationarity must change when thinking of such observations, focussing on the features present in bivariate series that cannot be found in univariate observations.

12:15 to 13:30 Lunch at Wolfson Court
13:30 to 14:00 Change-point tests based on estimating functions
Many classical change-point tests are based on cumulative sums of estimating functions, the most prominent example being quasi maximum likelihood scores. Examples include testing for changes in the location model, in continuous linear and non-linear autoregressive time series, and, most recently, in count time series. While classical theory deals with offline procedures, where the full data set has been observed before a statistical decision about a change-point is made, the same principles can be used in sequential testing. The latter has gained increased interest in the last decade: initial parameter estimation is based on a historic data set with no change-point, before cumulative sum charts are used to monitor newly arriving data. In such a setup, asymptotics are carried out with the size of the historic data set increasing to infinity. In applications such a data set will typically exist, as usually at least some data is collected before any reasonable statistical inference can be made. In this talk we explain the underlying ideas and extract regularity conditions under which asymptotics can be derived both under the null hypothesis and under alternatives. We illustrate the usefulness using different examples that have partly already been discussed in the literature.
14:00 to 14:30 Inference for multiple change-points in time series via likelihood ratio scan statistics
We propose a Likelihood Ratio Scan Method (LRSM) for multiple change-points estimation in piecewise stationary processes. Using the idea of scan statistics, the computationally infeasible global multiple change-points estimation problem is reduced to a number of single change-point detection problems in various local windows. The computation can be performed efficiently, with order $O(n\log n)$. Consistency of the estimated number and locations of the change-points is established. Moreover, a procedure for constructing confidence intervals for each change-point is developed. Simulation experiments show that LRSM outperforms other methods when the series length is large and the number of change-points is relatively small.
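A toy version of the local-window scan idea, for a change in mean under unit-variance Gaussian noise, is sketched below. The window width and threshold are illustrative assumptions, and the actual LRSM procedure differs in detail (it uses likelihood ratios for general piecewise stationary models).

```python
import numpy as np

def lr_scan(x, h=30, threshold=4.0):
    """Scan statistic for a change in mean: at each t, compare the h
    observations before t with the h observations after t (unit
    variance assumed); local maxima above `threshold` are candidates."""
    n = len(x)
    stat = np.zeros(n)
    for t in range(h, n - h):
        diff = x[t:t + h].mean() - x[t - h:t].mean()
        stat[t] = np.sqrt(h / 2.0) * abs(diff)
    cands = []
    for t in range(h, n - h):
        # keep t only if it is the maximum of its own 2h-wide window
        if stat[t] > threshold and stat[t] == stat[t - h:t + h + 1].max():
            cands.append(t)
    return cands, stat

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 150),
                    rng.normal(2, 1, 150),
                    rng.normal(0, 1, 150)])
cands, stat = lr_scan(x)
```

Each local scan costs O(h) per position, which is how the global multiple change-point problem collapses into many cheap single change-point tests.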
14:30 to 15:00 Non-stationary functional time series: an application to electricity supply and demand
One main feature of electricity spot prices is the frequent occurrence of spikes, that is, of periods of extreme prices that are typically short-lived and during which the spot price exceeds its normal level many times over. Such spikes usually occur if the supply and demand curves that determine the spot price meet in their steeper parts. For a better assessment of the risk in such situations, we propose to include the complete supply and demand curves to forecast spot prices.

We model the spread between the supply and demand curves as a functional time series. The approach is based on a decomposition into eigenfunctions, with the eigenvalues modelled by a dynamic factor model. We find that the form of the spread does not remain stable but mostly evolves slowly over time. There are, however, a few marked time points of sudden change in the functional form of the spread. The project is ongoing work and the talk will cover some preliminary results.

15:00 to 15:30 Afternoon Tea
15:30 to 16:00 N Pavlidis (Lancaster University)
High-Dimensional Incremental Divisive Clustering under Population Drift
Clustering is a central problem in data mining and statistical pattern recognition with a long and rich history. The advent of Big Data has introduced important challenges to existing clustering methods in the form of high-dimensional, high-frequency, time-varying streams of data. Up-to-date research on Big Data clustering has been almost exclusively focused on addressing individual aspects of the problem in isolation, largely ignoring whether and how the proposed methods can be extended to address the overall problem. We will discuss an incremental divisive clustering approach for high-dimensional data that has storage requirements that are low and more importantly independent of the stream size, and can identify changes in the population distribution that require a revision of the clustering result.
16:00 to 16:30 Shape smoothing (and what I hope to get from the Newton change-point program)
This talk will present some ideas for smoothing manifolds that have been observed as discrete meshes subject to noise. We will discuss the potential to create wavelet-type bases informed by the geometry of the manifold. As opposed to global Laplace-Beltrami eigenfunctions, localised bases can be found through the lifting scheme. These bases should allow a parsimonious representation of the co-ordinate functions and allow denoising of the original shape via thresholding.

I will also discuss some hopes that I have for the Newton Institute program and areas in change-point inference I am interested in working on.

16:30 to 17:00 Precision of Disorders Detection
The lecture presents results on the problem of change point detection for Markov processes, generalizing the results contained in the publications [2], [4], [3] and [1]. A short description is as follows. A random sequence whose segments are homogeneous Markov processes is registered. Each segment has its own transition probability law, and the length of the segment is unknown and random. The transition probabilities of each process are known, and a joint a priori distribution of the disorder moments is given. Detection of the disorder is rarely precise; the decision maker accepts some deviation in the estimation of the disorder moment. In the models considered, the aim is to indicate the change point with fixed, bounded error with maximal probability. The case of different precision for over- and under-estimation of this point is analysed, including the situation where, with positive probability, the disorder does not appear. The observed sequence, when the change point is known, has the Markov property. The results explain the structure of the optimal detector in various circumstances, show new details of the solution construction, and slightly extend the range of application. The motivation for this investigation is the modelling of attacks on network nodes. The objective is to detect an attack immediately, or within a very short time before or after its appearance, with the highest probability. The problem is reformulated as optimal stopping of the observed sequences. A detailed analysis of the problem is presented to show the form of the optimal decision function.

Key Words: disorder problem, sequential detection, optimal stopping, multi-variate optimization

19:30 to 22:00 Conference Dinner at Cambridge Union Society hosted by Cambridge Dining Company
Thursday 16th January 2014
09:30 to 10:00 P Fearnhead (Lancaster University)
Computationally Efficient Algorithms for Detecting Changepoints
We consider algorithms that can obtain the optimal segmentation of data under approaches such as penalised likelihood. The penalised likelihood criterion requires the user to specify a penalty value, and the choice of penalty affects the number of changepoints that are detected. We show how it is possible to obtain the optimal segmentation for all penalty values across a continuous range. The computational complexity of this approach can be linear in the number of data points, and linear in the difference in the number of changepoints between the optimal segmentations for the smallest and largest penalty values. The algorithm can be used to find optimal segmentations under the minimum description length criterion much more efficiently than using the segment neighbourhood algorithm.
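For context, penalised-cost segmentation for a single fixed penalty can be solved exactly by the classical Optimal Partitioning dynamic programme; the talk's contribution is to solve this over a continuous range of penalties, which the sketch below does not attempt. A minimal O(n^2) version for a change in mean, with an illustrative penalty value:

```python
import numpy as np

def optimal_partitioning(x, beta):
    """Exact penalised-cost segmentation by dynamic programming
    (the O(n^2) Optimal Partitioning recursion).

    Segment cost = residual sum of squares about the segment mean;
    each change-point adds penalty `beta`.
    """
    n = len(x)
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])

    def seg_cost(i, j):  # RSS of x[i:j] about its mean (j > i)
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / (j - i)

    F = np.full(n + 1, np.inf)  # F[j] = best penalised cost of x[:j]
    F[0] = -beta                # cancels the penalty of the first segment
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        cand = [F[i] + seg_cost(i, j) + beta for i in range(j)]
        i_star = int(np.argmin(cand))
        F[j], last[j] = cand[i_star], i_star
    cps, j = [], n              # backtrack the optimal change-points
    while last[j] > 0:
        j = last[j]
        cps.append(j)
    return sorted(cps)

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 60),
                    rng.normal(3, 1, 60),
                    rng.normal(0, 1, 60)])
cps = optimal_partitioning(x, beta=15.0)
```

Pruning ideas (as in PELT) bring the expected cost down towards linear, and tracking how the optimal solution changes as beta varies is exactly the "all penalty values" problem the talk addresses.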
10:00 to 10:30 An algorithm to segment count data using a negative binomial model
We consider the problem of segmenting a count data profile. We develop an algorithm to recover the best (w.r.t. the likelihood) segmentations in 1 to K_{max} segments. We prove that the optimal segmentation can be recovered using a compression scheme which reduces the time complexity. The compression is particularly efficient when the signal has large plateaus. We illustrate our algorithm on next generation sequencing data.
10:30 to 11:00 Simultaneous break point detection and variable selection in quantile regression models
This talk discusses new model fitting techniques for quantiles of an observed data sequence, including methods for data segmentation and variable selection. The main contribution, however, is in providing a means to perform these two tasks simultaneously. This is achieved by matching the data with the best-fitting piecewise quantile regression model, where the fit is determined by a penalization derived from the minimum description length principle. The resulting optimization problem is solved with the use of genetic algorithms. The proposed, fully automatic procedures are, unlike traditional break point procedures, not based on repeated hypothesis tests, and do not require, unlike most variable selection procedures, the specification of a tuning parameter. Theoretical large-sample properties are derived. Empirical comparisons with existing break point and variable selection methods for quantiles indicate that the new procedures work well in practice.
11:00 to 11:30 Morning Coffee
11:30 to 12:15 Constructing adaptive interference-reduced Wigner-Ville spectral estimators of non-stationary time series
Co-authors: Florent Autin (Université d'Aix-Marseille 1), Gerda Claeskens (KULeuven), Rainer von Sachs (UCLouvain)

In this talk we propose estimators of the time-frequency spectrum of a (zero mean) non-stationary time series whose second order structure varies across time. The estimator is obtained by smoothing the empirical Wigner-Ville (WV) spectrum (Martin and Flandrin, 1985), which is a highly localized time-frequency spectrum. Using the empirical WV spectrum avoids a prior time-frequency segmentation (such as for the segmented periodogram (Schneider and von Sachs, 1996)); nevertheless it suffers from low and heterogeneous signal-to-noise ratios and from severe interferences. In addition, the associated time-frequency spectrum is best modeled as an anisotropic object with locally varying smoothness in both the time and frequency directions (Neumann and von Sachs, 1997). All this makes smoothing very challenging. Our approach is to project the empirical WV data onto a specifically designed hyperbolic wavelet basis (Autin et al., 2013) and to use a tree-structured thresholding (Autin et al., 2011, 2013) under constraints inspired notably by Heisenberg's uncertainty principle. Such an approach is expected to ensure an adaptive time-frequency representation and to reduce the cross-interferences of the WV spectrum.

12:15 to 13:30 Lunch at Wolfson Court
13:30 to 14:15 Bayesian inference in continuous time jump processes
In this talk I will discuss recent advances in inference for continuous time processes with random changepoints or jumps. I will discuss cases with finite numbers of jumps, modelled within a jump-diffusion or piecewise deterministic process framework, then go on to describe processes with almost surely infinite numbers of jumps on finite intervals, focussing on recent developments for alpha-stable Levy processes. The methodology is Bayesian, using computational methods related to Markov chain Monte Carlo and particle filtering.
14:15 to 15:00 An Automated Statistician which learns Bayesian nonparametric models of time series data
I will describe the "Automated Statistician", a project which aims to automate the exploratory analysis and modelling of data. Our approach starts by defining a large space of related probabilistic models via a grammar over models, and then uses Bayesian marginal likelihood computations to search over this space for one or a few good models of the data. The aim is to find models which have both good predictive performance and are somewhat interpretable. Our initial work has focused on the learning of unknown nonparametric regression functions, and on learning models of time series data, both using Gaussian processes. Once a good model has been found, the Automated Statistician generates a natural language summary of the analysis, producing a 10-15 page report with plots and tables describing the analysis. I will focus in particular on the modelling of time series, including how we handle change points in Gaussian process models. I will also discuss challenges such as: how to trade off predictive performance and interpretability, how to translate complex statistical concepts into natural language text that is understandable by a numerate non-statistician, and how to integrate model checking.

This is joint work with James Lloyd and David Duvenaud (Cambridge) and Roger Grosse and Josh Tenenbaum (MIT).

15:00 to 15:30 Afternoon Tea
15:30 to 15:50 F Lindner (Karlsruhe Institute of Technology)
Local moving Fourier based bootstrapping
In applications, many time series, while not being globally stationary, are locally well approximated by stationary time series models such as autoregressive processes. Such time series can be well approximated by the class of locally stationary processes introduced by Dahlhaus (1997), allowing for a rigorous asymptotic theory. In this work we extend existing Fourier based bootstrap methods to locally stationary time series. Using a local moving Fourier transform, we can do this globally for the full locally stationary time series, as opposed to local bootstrap versions which can only pointwise mimic the locally stationary approximation. This allows us to obtain locally stationary bootstrap processes in the time domain. We derive asymptotic properties of the corresponding Fourier transform, based on which we can show that the bootstrap time series has asymptotically the same covariance structure as the original locally stationary time series. We will then illustrate the behaviour with some simulations.
15:50 to 16:10 M Little (Aston University/MIT)
Inferring change points in signal levels through deterministic minimization of a generalized global functional
Abrupt level change points are ubiquitous. Knowing the change points and levels of a time series is critical to many practical signal analysis problems in science and engineering. For this, and other reasons, the problem of detecting level shifts, first studied in the 1940s in process control, is of enduring interest. In this talk I will detail a set of simple, novel, generalized, deterministic nonlinear algorithms for this problem. These algorithms are based on a global functional which, when minimized, finds the maximum a posteriori location of the change points and values of the levels. This global functional approach subsumes some well-known algorithms for this problem that were developed in digital image processing contexts, and also folds in several algorithms from statistical machine learning that have hitherto been seen as distinct. The algorithms are computationally simple, and many are convex optimization problems for which standard, fast implementations are available.
16:10 to 16:30 AM Sykulski (University College London)
Locally-stationary modelling of oceanographic spatiotemporal data
Stochastic modelling of oceanographic spatiotemporal data provides useful summaries of key physical characteristics observed from the ocean surface. Such summaries are useful in developing global climate models and in our ability to respond to environmental disasters such as oil spills. Ocean surface data is typically collected in the form of Lagrangian time series, where freely-drifting instruments (or drifters) repeatedly report their position to passing satellites. In this talk we first demonstrate that appropriate stationary models can accurately describe short intervals of the data. Over longer periods, however, drifters visit regions with different spatial characteristics, which translates to time series that are nonstationary. We demonstrate how to account for this nonstationarity semi-parametrically, where we allow underlying parameters of the stochastic models to vary in time and be estimated using rolling windows. We also employ semi-parametric techniques to account for sampling issues and model misspecification. The time-varying parameter estimates can then be interpreted spatially, by aggregating output from drifters that visit similar locations. We demonstrate the effectiveness of our approach with data from the Global Drifter Programme, where regional (as well as global) effects can be efficiently extracted using our simple statistical modelling techniques.
Friday 17th January 2014
09:30 to 10:15 The group fused Lasso for multiple change-point detection
We present the group fused Lasso for detection of multiple change-points shared by a set of co-occurring one-dimensional signals. Change-points are detected by approximating the original signals with a constraint on the multidimensional total variation, leading to piecewise-constant approximations. Fast algorithms are proposed to solve the resulting optimization problems, either exactly or approximately. Conditions are given for consistency of both algorithms as the number of signals increases, and empirical evidence is provided to support the results on simulated and array comparative genomic hybridization data.
10:15 to 11:00 Quickest Changepoint Detection: Optimality Properties of the Shiryaev-Roberts-Type Procedures
We consider a changepoint problem of detecting an abrupt change in a process in a sequential setting when one wants to design a detection procedure that minimizes the average delay to detection of a change subject to constraints on the false alarm rate. A brief overview of the field as well as recent advances will be presented with the focus on a simple scenario where both pre- and post-change distributions are known. Optimality properties of the conventional Shiryaev-Roberts detection procedure and its modifications will be discussed.
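The Shiryaev-Roberts statistic admits a simple recursion, R_n = (1 + R_{n-1}) L_n, where L_n is the likelihood ratio of the n-th observation, with an alarm raised when R_n first crosses a threshold. A minimal sketch for the simple scenario mentioned above, a Gaussian mean shift with known pre- and post-change parameters (threshold chosen for illustration only):

```python
import numpy as np

def shiryaev_roberts(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=1e5):
    """Shiryaev-Roberts procedure for a mean shift from mu0 to mu1.

    R_n = (1 + R_{n-1}) * L_n, alarm when R_n first exceeds `threshold`;
    returns the alarm index, or None if no alarm is raised.
    """
    r = 0.0
    for t, xt in enumerate(x):
        # one-observation likelihood ratio N(mu1, s^2) / N(mu0, s^2)
        lr = np.exp((mu1 - mu0) / sigma**2 * (xt - (mu0 + mu1) / 2.0))
        r = (1.0 + r) * lr
        if r > threshold:
            return t
    return None

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 100),   # pre-change regime
                    rng.normal(1, 1, 200)])  # change occurs at time 100
alarm = shiryaev_roberts(x)
```

The average run length to false alarm is approximately the threshold itself, which is the knob traded off against detection delay in the optimality results the talk discusses.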
11:00 to 11:30 Morning Coffee
11:30 to 12:15 D Stoffer (University of Pittsburgh)
Adaptive Spectral Estimation for Nonstationary Time Series
We propose a method for analyzing possibly nonstationary time series by adaptively dividing the time series into an unknown but finite number of segments and estimating the corresponding local spectra by smoothing splines. The model is formulated in a Bayesian framework, and the estimation relies on reversible jump Markov chain Monte Carlo (RJMCMC) methods. For a given segmentation of the time series, the likelihood function is approximated via a product of local Whittle likelihoods. The number and lengths of the segments are assumed unknown and may change from one MCMC iteration to another.
12:15 to 13:30 Lunch at Wolfson Court