
DOE Seminar

Multiplicative Algorithms: A class of algorithmic methods used in optimal experimental design

Torsney, B (Glasgow)
Friday 15 August 2008, 16:00-16:30

Seminar Room 1, Newton Institute

Abstract

Multiplicative algorithms have been considered by several authors. Titterington (1976) proved monotonicity for D-optimality for a specific choice. This latter choice is also monotonic for finding the maximum likelihood estimators of the mixing weights, given data from a mixture of distributions; indeed it is an EM algorithm, see Torsney (1977). Torsney (1983) proved monotonicity for A-optimality, extending a result of Fellman (1974) for c-optimality, although Fellman was not focussing on algorithms. Both choices also appear to be monotonic in determining, respectively, c-optimal and D-optimal conditional designs, i.e. in determining several optimising distributions; see Martin-Martin, Torsney and Fidalgo (2007). Other choices are needed if the criterion function can have negative derivatives, as in some maximum likelihood estimation problems, or if partial derivatives are replaced by vertex directional derivatives; see Torsney (1988), Torsney and Alahmadi (1992) and Torsney and Mandal (2004, 2006).

We study a new approach to determining optimal designs, exact or approximate, both for correlated responses and for the uncorrelated case. A simple version of this method, in the case of one design variable x, is based on transforming a conceived set of design points {x_i} on a finite interval to the proportions of the design interval defined by the sub-intervals between successive points. Methods for determining optimal (design) weights can therefore be used to determine optimal values of these proportions. We explore the potential of this method in a variety of examples encompassing both linear and nonlinear models (some assuming a correlation structure), and a range of criteria including D-optimality, L-optimality and c-optimality.

It is also planned to extend this work as follows:

1. First transform x to F(x), where F(.) is a distribution function, and then transform a set of design points to the proportions naturally defined by the differences in the F(.) values of successive design points. This has the advantage of accommodating unbounded design intervals, as occur in nonlinear models, and is a natural choice in binary regression models.

2. A major problem in optimum experimental design theory is discrimination between several plausible models. We "believe" that using this approach we can obtain T-optimum designs under some differentiability conditions.

3. We also consider examples with more than one design variable. In this case we transform the design problem to one of optimising with respect to several distributions.
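
To make the two computational ingredients of the abstract concrete, the following is a minimal numerical sketch, not code from the talk: it implements Titterington's (1976) multiplicative update for D-optimal design weights, together with the transformation between design points on a finite interval and the proportions defined by the gaps between successive points. The grid, the quadratic regression model and the iteration settings are illustrative assumptions, not the speaker's choices.

import numpy as np

def d_optimal_weights(X, n_iter=5000, tol=1e-10):
    # Multiplicative algorithm with Titterington's (1976) choice for D-optimality:
    #   w_i <- w_i * d_i(w) / p,  where d_i(w) = x_i' M(w)^{-1} x_i and M(w) = sum_i w_i x_i x_i'.
    # X is an (n, p) array whose rows are the candidate design points x_i.
    n, p = X.shape
    w = np.full(n, 1.0 / n)                      # start from the uniform design
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)               # information matrix M(w)
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)   # derivatives d_i(w)
        w_new = w * d / p                        # monotone update; weights remain normalised
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

def points_to_proportions(x, a, b):
    # Ordered design points in [a, b] -> proportions of the design interval given by
    # the gaps between successive points (including the two end gaps); they sum to 1.
    cuts = np.concatenate(([a], np.sort(np.asarray(x, dtype=float)), [b]))
    return np.diff(cuts) / (b - a)

def proportions_to_points(q, a, b):
    # Inverse map: recover the design points from the interval proportions.
    return a + (b - a) * np.cumsum(q)[:-1]

if __name__ == '__main__':
    # Toy example (an assumption for illustration): quadratic regression,
    # f(x) = (1, x, x^2), on a grid over the design interval [-1, 1].
    grid = np.linspace(-1.0, 1.0, 41)
    X = np.column_stack([np.ones_like(grid), grid, grid ** 2])
    w = d_optimal_weights(X)
    top = np.argsort(w)[-3:]                     # three heaviest candidate points
    print('heaviest points:', np.sort(grid[top]))        # approximately -1, 0, 1
    print('their weights  :', np.round(w[top], 3))       # approximately 1/3 each

    pts = np.array([-0.7, 0.0, 0.4])
    q = points_to_proportions(pts, -1.0, 1.0)
    print('round trip     :', proportions_to_points(q, -1.0, 1.0))   # recovers pts

In this toy run the weights concentrate on the well-known D-optimal support {-1, 0, 1} for quadratic regression, with roughly equal weight on each point; the same weight-updating machinery could then be applied to the interval proportions q rather than to candidate-point weights, which is the idea behind the transformation described in the abstract.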

Presentation

[pdf]

Audio

MP3

Video

