
Measuring Sample Discrepancy with Diffusions

Presented by: 
Andrew Duncan (University of Sussex; The Alan Turing Institute)
Date: 
Tuesday 18th July 2017 - 15:40 to 16:20
Venue: 
INI Seminar Room 1
Abstract: 
In many applications one wishes to quantify the discrepancy between a sample and a target probability distribution. This has become particularly relevant for Markov chain Monte Carlo methods, where practitioners are increasingly turning to biased methods which trade asymptotic exactness for computational speed. While the reduction in variance due to more rapid sampling can outweigh the bias introduced, the inexactness creates new challenges for parameter selection. The natural metric in which to quantify this discrepancy is the Wasserstein (Kantorovich) metric; however, the difficulty of computing this quantity has typically dissuaded practitioners from using it. To address this, we introduce a new computable quality measure based on Stein's method that quantifies the maximum discrepancy between sample and target expectations over a large class of test functions. We demonstrate this tool by comparing exact, biased, and deterministic sample sequences, and illustrate applications to hyperparameter selection, convergence rate assessment, and quantifying bias-variance tradeoffs in posterior inference.
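
As a rough sketch (not taken from the talk itself; the diffusion-based construction presented there may use a different Stein operator and test-function class), a Stein discrepancy of the kind described above can be written, for a target density $p$ and sample points $x_1, \dots, x_n$, as

\[
  \mathcal{S}\bigl(\{x_i\}, \mathcal{T}_p, \mathcal{G}\bigr)
  = \sup_{g \in \mathcal{G}} \left| \frac{1}{n} \sum_{i=1}^{n} (\mathcal{T}_p g)(x_i) \right|,
  \qquad
  (\mathcal{T}_p g)(x) = \langle \nabla \log p(x),\, g(x) \rangle + \nabla \cdot g(x),
\]

where the (Langevin) Stein operator $\mathcal{T}_p$ satisfies $\mathbb{E}_p[(\mathcal{T}_p g)(X)] = 0$ for every $g$ in the test class $\mathcal{G}$, so the supremum measures the worst-case gap between sample and target expectations. The quantity is computable in practice because $\nabla \log p$ does not require the normalising constant of $p$.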


