A comparison of three Bayesian approaches for constructing model robust designs

Date: 
Friday 2nd September 2011 - 15:00 to 15:30
Venue: 
INI Seminar Room 1
Session Title: 
Screening and Model Uncertainty
Session Chair: 
Sue Lewis
Abstract: 
While optimal designs are commonly used in the design of experiments, the optimality of those designs frequently depends on the form of an assumed model. Several useful criteria have been proposed to reduce such dependence, and efficient designs have then been constructed based on these criteria, often algorithmically. In the model robust design paradigm, a space of possible models is specified and designs are sought that are efficient for all models in the space. The Bayesian criterion given by DuMouchel and Jones (1994) posits a single model that contains both primary and potential terms. In this article we propose a new Bayesian model robustness criterion that combines aspects of both of these approaches. We then evaluate the efficacy of these three alternatives empirically. We conclude that the model robust criteria generally lead to improved robustness; however, the increased robustness can come at a significant cost in terms of computing requirements.
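For readers unfamiliar with the DuMouchel and Jones (1994) criterion mentioned in the abstract, the sketch below illustrates the idea: the design maximises the Bayesian D-criterion det(X'X + K/tau^2), where K is diagonal with zeros for primary terms and ones for potential terms. This is a minimal illustration only; the function name, the choice tau = 1, and the toy two-factor design are assumptions for the example and are not taken from the talk.

import numpy as np

def dumouchel_jones_criterion(X, n_primary, tau=1.0):
    # X         : n x p model matrix, primary-term columns first,
    #             followed by the potential-term columns.
    # n_primary : number of primary-term columns.
    # tau       : prior standard deviation of the potential-term
    #             coefficients (primary terms get a diffuse prior).
    # Returns log det(X'X + K / tau^2), with K diagonal: zeros for
    # primary terms, ones for potential terms.
    p = X.shape[1]
    K = np.diag([0.0] * n_primary + [1.0] * (p - n_primary))
    info = X.T @ X + K / tau**2
    sign, logdet = np.linalg.slogdet(info)
    return logdet if sign > 0 else -np.inf

# Illustrative use: score an 8-run design for a model with three
# primary terms (intercept, x1, x2) and one potential term (x1*x2).
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=(8, 2))
X = np.column_stack([np.ones(8), x, x[:, 0] * x[:, 1]])
print(dumouchel_jones_criterion(X, n_primary=3))

In a search algorithm, candidate designs would be compared on this criterion value rather than on the ordinary D-criterion det(X'X), which reduces the dependence on whether the potential terms are actually in the model.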