Improvements to variational Bayesian inference
Seminar Room 2, Newton Institute Gatehouse
Variational Bayesian (VB) inference is an approximate inference framework that has been applied successfully to a wide variety of graphical models. It is well accepted that, compared with Markov chain Monte Carlo (MCMC) inference, VB trades higher bias for lower variance in posterior estimation. In this talk we explore improvements to the VB framework that reduce this bias, in the context of a specific Bayesian network called latent Dirichlet allocation. Specifically, we consider two ideas: collapsing, i.e. integrating out, variables before any approximations are made, and hybrid methods that combine VB and MCMC techniques.
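To make the collapsing idea concrete, the following is a brief sketch in the standard LDA notation (topic proportions $\theta$, topic-word distributions $\phi$, topic assignments $\mathbf{z}$, words $\mathbf{w}$, Dirichlet hyperparameters $\alpha$ and $\beta$); the exact factorisations used in the talk may differ. Dirichlet-multinomial conjugacy lets one integrate out $\theta$ and $\phi$ in closed form before any approximation:

$$p(\mathbf{w}, \mathbf{z} \mid \alpha, \beta) = \int\!\!\int p(\mathbf{w} \mid \mathbf{z}, \phi)\, p(\phi \mid \beta)\, p(\mathbf{z} \mid \theta)\, p(\theta \mid \alpha)\, d\theta\, d\phi .$$

A mean-field approximation is then applied only to the remaining discrete variables, $q(\mathbf{z}) = \prod_n q(z_n)$, rather than to the joint $q(\mathbf{z})\,q(\theta)\,q(\phi)$ of standard VB. Because the collapsed family makes strictly weaker independence assumptions, the resulting evidence lower bound is at least as tight, which is one route to reducing the bias of the approximation.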