Validating approximate Bayesian computation on posterior convergence

Presented by: 
Wentao Li (Lancaster University)
Date: 
Wednesday 5th July 2017 - 09:45 to 10:30
Venue: 
INI Seminar Room 1
Abstract: 
Co-author: Paul Fearnhead (Lancaster University)

Many statistical applications involve models for which it is difficult to evaluate the likelihood, but relatively easy to sample from. Approximate Bayesian computation (ABC) is a likelihood-free method for implementing Bayesian inference in such cases. We present a number of surprisingly strong asymptotic results for the regression-adjusted version of approximate Bayesian computation introduced by Beaumont et al. (2002). We show that, for an appropriate choice of the bandwidth in approximate Bayesian computation, using regression adjustment leads to a posterior that, asymptotically, correctly quantifies uncertainty. Furthermore, for such a choice of bandwidth we can implement an importance sampling algorithm to sample from the posterior whose acceptance probability tends to 1 as the data sample size increases. This compares favourably with standard approximate Bayesian computation, where the only way to obtain a posterior that correctly quantifies uncertainty is to choose a much smaller bandwidth, for which the acceptance probability tends to 0 and hence Monte Carlo error dominates.
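As a rough illustration of the regression-adjusted ABC algorithm described above, the following is a minimal Python sketch for a toy Gaussian-mean model with the sample mean as summary statistic. The prior, bandwidth h and simulation sizes are illustrative assumptions, not settings from the talk, and no importance sampling proposal is used; the adjustment step theta* = theta - beta_hat * (s - s_obs) follows Beaumont et al. (2002) with a plain accept/reject (uniform kernel) step.

    import numpy as np

    def abc_regression_adjusted(s_obs, n_sims=50_000, h=0.05, rng=None):
        """Toy regression-adjusted ABC for the mean of N(theta, 1) data.

        Summary statistic: the sample mean. Prior: theta ~ N(0, 10).
        h is the ABC bandwidth (acceptance threshold on |s - s_obs|).
        All choices here are illustrative, not those of the talk.
        """
        rng = np.random.default_rng() if rng is None else rng
        n_data = 100  # size of each simulated data set

        # 1. Simulate parameters from the prior and summaries from the model.
        theta = rng.normal(0.0, np.sqrt(10.0), size=n_sims)
        s = rng.normal(theta, 1.0 / np.sqrt(n_data))  # sample mean of N(theta, 1) data

        # 2. Standard ABC: keep draws whose summary lies within bandwidth h of s_obs.
        keep = np.abs(s - s_obs) < h
        theta_acc, s_acc = theta[keep], s[keep]

        # 3. Regression adjustment (Beaumont et al., 2002): fit a linear regression
        #    of theta on s among accepted draws and shift each accepted theta to
        #    the value predicted at the observed summary s_obs.
        X = np.column_stack([np.ones_like(s_acc), s_acc - s_obs])
        coef, *_ = np.linalg.lstsq(X, theta_acc, rcond=None)
        theta_adj = theta_acc - coef[1] * (s_acc - s_obs)

        return theta_acc, theta_adj

    # Example: observed summary from data generated with theta = 1.2.
    rng = np.random.default_rng(1)
    s_obs = rng.normal(1.2, 1.0 / np.sqrt(100))
    raw, adjusted = abc_regression_adjusted(s_obs, rng=rng)
    print(f"accepted draws: {raw.size}")
    print(f"raw ABC posterior mean/sd:      {raw.mean():.3f} / {raw.std():.3f}")
    print(f"adjusted ABC posterior mean/sd: {adjusted.mean():.3f} / {adjusted.std():.3f}")

In this sketch the adjusted draws typically concentrate more tightly around the prediction at s_obs than the raw accepted draws, which is the effect the talk's asymptotic results make precise.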
