Rates of convergence and related issues in nonparametric Bayesian inference
Seminar Room 2, Newton Institute Gatehouse
We consider the problem of determining the rate of convergence of posterior distributions in nonparametric Bayesian analysis. We argue that the rate is driven by the size of the model, measured by its metric entropy, and by the rate of concentration of the prior around the true parameter. We discuss several examples, including priors based on bracketing, splines and Dirichlet mixtures, and show that the convergence rate depends on the level of smoothness. We then discuss an effective strategy for adapting automatically to the unknown level of smoothness and argue that the posterior assigns high probability to the correct model. Finally, we give examples in which the frequentist and Bayesian notions of variability agree and others in which they do not.
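The interplay between entropy and prior concentration mentioned above is commonly formalized along the following lines (a simplified sketch of the standard sufficient conditions from the posterior contraction literature; the exact neighborhoods and constants are illustrative and not necessarily those used in the talk). For i.i.d. observations from a true density $p_0$, a model (sieve) $\mathcal{P}_n$, a metric $d$, and a sequence $\varepsilon_n \to 0$ with $n\varepsilon_n^2 \to \infty$:

```latex
% Entropy condition: the model is not too large at scale eps_n
\log N(\varepsilon_n, \mathcal{P}_n, d) \le n \varepsilon_n^2,

% Prior mass condition: the prior charges a Kullback-Leibler
% neighborhood of the true density p_0 at rate eps_n
\Pi\bigl( p : K(p_0, p) \le \varepsilon_n^2 \bigr) \ge e^{-c\, n \varepsilon_n^2},

% Conclusion: the posterior contracts around p_0 at rate eps_n,
% i.e. for a sufficiently large constant M,
\Pi\bigl( p : d(p, p_0) > M \varepsilon_n \,\big|\, X_1, \dots, X_n \bigr)
  \;\longrightarrow\; 0 \quad \text{in } P_0\text{-probability.}
```

Here $N(\varepsilon, \mathcal{P}, d)$ is the $\varepsilon$-covering number of $\mathcal{P}$ and $K$ denotes a Kullback-Leibler-type divergence; balancing the two conditions yields the optimal $\varepsilon_n$ for a given smoothness level.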