
Timetable (DOEW02)

Design of Experiments: Recent Advances in Methods and Applications (DEMA2008)

Monday 11th August 2008 to Friday 15th August 2008

Monday 11th August 2008
08:30 to 10:00 Registration
10:00 to 11:00 D Cox (University of Oxford)
Randomization was one of four key elements in R.A. Fisher's discussion of experimental design. There have always been controversial aspects to it: Student seems never to have accepted its desirability, and some personalistic Bayesians have argued that it is at best unnecessary and typically harmful. After some brief historical remarks and a review of the purposes of randomization in different contexts, some open issues, including the relation with conditional inference, are discussed.
11:00 to 11:10 Poster storm I
11:10 to 11:30 Coffee
11:30 to 12:30 Poster session
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:30 S Gilmour (QMUL)
Multi-stratum response surface designs
Many industrial and laboratory-based experiments involve studying some factors which are hard, time-consuming or expensive to change and other factors which can be changed more easily. It is now well-recognised that varying some factors more quickly than others leads to split-plot or other multi-stratum structures. When the aims of the experiment lead to a desire to fit second-order polynomial response surface models, it is usually impossible to use a standard orthogonal design. Instead, an algorithm is usually used to search for a design which is optimal in some way. The different algorithms which have been suggested will be reviewed and the assumptions underlying their definitions of optimality will be reconsidered. The appropriateness of different criteria depends crucially on the objectives of the experiment, and inappropriate scaling of the criteria can lead to misleading results. The differences and similarities between stratum-by-stratum and all-in-one methods of construction will be clarified. A Bayesian perspective will be used to show that, unless prior knowledge of the sizes of random effects is very certain or effects estimated in the higher strata are not of interest, experimenters should use either stratum-by-stratum methods or all-in-one methods with very large prior estimates of the higher stratum variance components.
14:30 to 15:00 C-S Cheng & PW Tsai (UC Berkeley / National Taiwan Normal University)
An approach to the selection of multistratum fractional factorial designs
We propose an approach to the selection of multistratum fractional factorial designs. Our criterion, derived as a good surrogate for the model-robustness criterion of information capacity, takes the stratum variances into account. Comparisons with minimum-aberration type criteria proposed in some recent works will be presented.
15:00 to 15:30 CA Vivacqua (Federal do Rio Grande do Norte)
Post-fractionated strip-block designs with applications to robust design and multistage processes
Novel arrangements for strip-block designs are presented, aimed at reducing the experimental effort. The goal is to provide theoretical properties of these new layouts employing post-fractionated strip-block designs. It is also shown how to perform the appropriate data analysis. An experiment on an industrial battery production process is used as an illustration. As a tool to aid the selection of appropriate plans, catalogs of post-fractionated strip-block designs with 16 and 32 trials are provided.
15:30 to 16:00 Tea
16:00 to 16:30 R Schwabe (Magdeburg)
Some considerations on optimal design for non-linear mixed models
In data analysis for life sciences mixed models, which involve both fixed and random effects, play an important role. Moreover, in this context many functional relationships are non-linear. These two features result in a parameter dependence of the (asymptotic) information matrix, which, as the inverse of the asymptotic covariance matrix for the maximum likelihood estimator, is meant to measure the performance of the underlying design of the study or experiment at hand. Consequently, the design optimization may and, in most cases, will be influenced by some of the model parameters. Recently, there has been some dispute on the most adequate form of the asymptotic information matrix obtained by linearization. In the present talk we will try to resolve this controversy and discuss the adequacy of some of the most popular design criteria. The ideas will be illustrated by some basic examples.
16:30 to 17:00 D Woods (Southampton)
Experiments in blocks for a non-normal response
Many industrial experiments measure a response that cannot be adequately described by a linear model with normally distributed errors. An example is an experiment in aeronautics to investigate the cracking of bearing coatings where a binary response was observed, success (no cracking) or failure (cracked). A further complication which often occurs in practice is the need to run the experiment in blocks, for example, to account for different operators or batches of experimental units. To produce more efficient experiments, block effects are often included in the model for the response. When the block effects can be considered as nuisance variables, a marginal (or population averaged) model may be appropriate, where the effects of individual blocks are not explicitly modelled. We discuss block designs for experiments where the response is described by a marginal model fitted using Generalised Estimating Equations (GEEs). GEEs are an extension of Generalised Linear Models (GLMs) that incorporate a correlation structure between experimental units in the same block; the marginal response for each observation follows an appropriate GLM. This talk will describe some design strategies for such models in an industrial context.
17:00 to 17:30 F Tekle (Maastricht University)
Maximin D-optimal designs for binary longitudinal responses
Optimal design problems for logistic mixed effects models for binary longitudinal responses are considered. A function of the approximate information matrix under the framework of the Penalized Quasi Likelihood (PQL) and a generalized linear mixed model with autocorrelation is optimized. Locally D-optimal designs are computed. Maximin D-optimal designs are considered to overcome the problem of parameter value dependency of the D-optimal designs. The results show that the optimal number of repeated measurements depends on the number of regression parameters in the model. The performance of the maximin D-optimal designs in terms of the maximin efficiency (MME) is high for a range of parameter values that is common in practice. The design locations for mixed-effects logistic models generally shift to the left as compared to the design locations for general linear mixed-effects models known in the literature.
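The maximin idea in the abstract above can be sketched numerically. The following is a minimal illustration, not the author's PQL-based computation: for a one-covariate logistic model it searches all two-point designs on a grid and picks the one maximizing the worst-case D-efficiency over a small set of assumed parameter values; all numbers are hypothetical.

```python
import numpy as np
from itertools import combinations

def info_matrix(xs, a, b):
    """Normalised Fisher information of a logistic model
    P(y=1|x) = 1/(1+exp(-(a+b*x))) at equally weighted design points xs."""
    M = np.zeros((2, 2))
    for x in xs:
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        f = np.array([1.0, x])
        M += p * (1 - p) * np.outer(f, f)
    return M / len(xs)

def d_eff(xs, theta, opt_det):
    """D-efficiency relative to the locally optimal determinant (2 parameters)."""
    return (np.linalg.det(info_matrix(xs, *theta)) / opt_det) ** 0.5

grid = np.linspace(-3, 3, 25)
thetas = [(0.0, 1.0), (0.5, 1.5), (-0.5, 0.8)]   # hypothetical parameter region
designs = list(combinations(grid, 2))            # all 2-point grid designs

# locally D-optimal determinant for each parameter value
opt = {th: max(np.linalg.det(info_matrix(xs, *th)) for xs in designs) for th in thetas}

# maximin design: best worst-case efficiency across the parameter region
best = max(designs, key=lambda xs: min(d_eff(xs, th, opt[th]) for th in thetas))
mme = min(d_eff(best, th, opt[th]) for th in thetas)
print([round(x, 2) for x in best], round(mme, 3))
```

The maximin efficiency printed here plays the role of the MME figure of merit mentioned in the abstract.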
17:30 to 18:30 Welcome Wine Reception
18:45 to 19:30 Dinner at Wolfson Court (Residents only)
Tuesday 12th August 2008
09:30 to 10:00 ED Schoen (Antwerp and Delft)
A blocking strategy for orthogonal arrays of strength 2
Orthogonal arrays (OAs) of strength 2 permit independent estimation of main effects. An orthogonally blocked OA could be considered as an OA with one additional factor. Such an OA is in fact a two-stratum design. The main effects of the treatment factors are estimated in the bottom stratum. The upper stratum may contain interaction components for these factors. The blocking factor supposedly has no interactions with treatment factors. I propose to search for suitable blocking arrangements by studying projections of arrays into those with one factor less. I illustrate with a complete catalogue of blocked pure or mixed 18-run arrays.
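The defining property used above, that a strength-2 array balances every pair of columns, is easy to check directly. This sketch (illustrative only; the catalogue construction itself is not shown) verifies it for a small two-level array, reading the last column as the added blocking factor.

```python
from itertools import combinations, product

def is_strength_2(array):
    """True if every pair of columns contains every level combination
    equally often (the defining property of a strength-2 OA)."""
    k = len(array[0])
    for i, j in combinations(range(k), 2):
        pairs = [(row[i], row[j]) for row in array]
        levels_i = {row[i] for row in array}
        levels_j = {row[j] for row in array}
        counts = [pairs.count(c) for c in product(levels_i, levels_j)]
        if len(set(counts)) != 1:
            return False
    return True

# OA(4, 2^3) of strength 2; treating the last column as the blocking factor
# gives a blocked design whose main effects stay orthogonal to blocks
oa = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(is_strength_2(oa))  # True
```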
10:00 to 10:30 J Kunert (Dortmund)
A-optimal block designs for the comparison of treatments with a control with autocorrelated errors
There is an extensive literature on optimal and efficient designs for comparing t test (or new) treatments with a control (or standard treatment) - see Majumdar (1996). However, almost all results assume the observations are uncorrelated. In many situations, it is more realistic to assume that observations in the same block are positively correlated, and there has been much interest in this case when all contrasts are of equal interest - see, for example, Martin (1996). Assuming that the estimation uses ordinary least-squares, Bhaumik (1990) found optimal within-block orderings under a first-order nearest-neighbour model NN(1) among some designs that would have been optimal test-control designs under independence. Cutler (1993) obtained some optimality results under a first-order autoregressive process AR(1) on the circle or the line, assuming generalised least-squares estimation for a known dependence. There are also some brief examples and discussion of the correlated case in Martin & Eccleston (1993, 2001). Here, we concentrate on generalised least-squares estimation for a known covariance. Results for independence, and Cutler's (1993) results for the AR(1), are for specific combinations of t, b and k, and use integer minimisation to ensure an optimal design exists. Here, we assume that the number of blocks b is large enough for an optimal design to exist, and consider the form of that optimal design. This method may lead to exact optimal designs for some b, t, k, but usually will only indicate the structure of an efficient design for any particular b, t, k, and yield an efficiency bound, usually unattainable. The bound and the structure can then be used to investigate efficient finite designs.
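The known-covariance generalised least-squares setting described above can be made concrete with a small computation. The sketch below is a simplified illustration, not the authors' derivation: it computes the average GLS variance of test-versus-control contrasts in a single block under an AR(1) error process, for a given within-block ordering of treatments; the orderings, block size and correlation are hypothetical.

```python
import numpy as np

def ar1_cov(k, rho):
    """AR(1) covariance matrix (unit variance) for k plots in a line."""
    idx = np.arange(k)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def avg_contrast_var(order, t, rho):
    """Average GLS variance of test-vs-control contrasts in one block whose
    plots receive treatments in the given order (treatment 0 = control)."""
    k = len(order)
    X = np.zeros((k, t + 1))
    X[:, 0] = 1.0                                  # block mean
    for plot, trt in enumerate(order):
        X[plot, 1 + trt] = 1.0
    M = X.T @ np.linalg.inv(ar1_cov(k, rho)) @ X
    C = np.linalg.pinv(M)      # g-inverse: effects not separately estimable,
                               # but test-control contrasts are
    return float(np.mean([C[1, 1] + C[1 + i, 1 + i] - 2 * C[1, 1 + i]
                          for i in range(1, t)]))

# two hypothetical orderings of control (0) and tests (1, 2) in a block of 4 plots
print(avg_contrast_var([0, 1, 0, 2], t=3, rho=0.5),
      avg_contrast_var([1, 0, 0, 2], t=3, rho=0.5))
```

Comparing such variances across orderings is the kind of exercise that, for large b, suggests the structure of an efficient design.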
10:30 to 11:00 L Trinca (UNESP)
Efficient blocked designs allowing for pure error estimation
In this talk we reconsider the problem of designing response surface experiments on processes with high variation. We propose some alternative criteria for efficiently blocking a fixed treatment set that have the flexibility of allowing for pure error estimation. These are compound design criteria that focus either on joint parameter estimation through confidence regions or on individual parameter estimation through confidence intervals. Several examples will illustrate the idea.
11:00 to 11:30 Coffee
11:30 to 12:00 K Baggerly (Texas)
Proteomics, ovarian cancer, and experimental design
Just as microarrays allow us to measure the relative RNA expression levels of thousands of genes at once, mass spectrometry profiles can provide quick summaries of the expression levels of hundreds of proteins. Using spectra derived from easily available biological samples such as serum, we hope to identify proteins linked with a difference of interest such as the presence or absence of cancer. With respect to ovarian cancer, this approach has been claimed to provide diagnostic tests with near perfect sensitivity and specificity. Based on the strength of these results, a home-brew test known as OvaCheck was advertised for public consumption. Unsurprisingly, such tests are of great interest at MD Anderson, and we have explored proteomic patterns in depth. In this talk, we will briefly introduce the mechanics underlying the mass spectrometry variant known as matrix-assisted laser desorption and ionization/time of flight (MALDI-TOF), and the special case known as surface-enhanced laser desorption and ionization/time of flight (SELDI-TOF). We then take a pictorial tour through some of the raw data, looking for interesting structure both in a single experiment and over multiple experiments. However, what the data most clearly shows is not biological structure, but rather the need for careful experimental design, data cleaning, and data preprocessing to ensure that the structure found is not due to systematic bias.
12:00 to 12:30 T Speed (UC Berkeley)
Experiments assessing the effects of preanalytical variables on molecular research
When the abundance of mRNA, proteins or metabolites in cell samples is measured using a genomic, proteomic or metabolomic assay, it may happen that the measurement is more influenced by uncontrolled preanalytical variables than by the measurement process itself. For example, if the cells are from a tissue sample taken during surgery, variables such as drugs, type or duration of anesthesia, and arterial clamp time can greatly affect the final molecular measurements, as can a host of post-acquisition variables such as time at room temperature, temperature of the room prior to fixing, type of fixative, time in fixative, rate of freezing, and so on. Lack of awareness of these possible effects can lead to incorrect diagnosis, incorrect treatment, and irreproducible results in research. How do we determine which of these variables matter for a given assay, and how do we derive standard procedures for sample acquisition, handling, processing and storage, prior to the assay? The answer is, of course, through experimentation. We will need to combine screening experiments, as the number of potentially important variables is large, with later experiments to determine robust combinations of factors which might become new standard operating procedures. The experiments must be on human tissue, we'd like replicates, and we'd like to be able to distinguish intra-person and inter-person variability. There are significant practical and ethical constraints surrounding such experiments. Nevertheless, the US National Cancer Institute's Office of Biorepositories and Biospecimen Research is committed to carrying out such experiments, to address the problems mentioned above. In this talk I will discuss some of the design challenges they are meeting, illustrating my discussion with an example concerning blood drawing.
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:30 C Brien (South Australia)
Tiers in gene expression microarray experiments
At least some gene expression microarray experiments are two-phase and multitiered. We compare those that are with those that are not. In particular, microarray experiments conducted on material derived from a prior experiment are two-phase, constituting a subset of experiments with a second phase in the laboratory. Recent papers have formulated the analysis of some two-phase microarray experiments using lengthy, ad hoc methods. A general method, based on tiers, will be described for synthesizing mixed models and analyses, and the results compared to those already published. In doing so, it will be demonstrated that pseudofactors can be used to ensure that only real sources of variation are retained in the analysis. Also discussed will be how, as for two-phase experiments in general, the properties of the first-phase design shape those of the whole experiment.
14:30 to 15:00 K Kerr (Washington)
Experimental design issues for gene expression microarrays
The reference design is a practical and popular choice for microarray studies using two-color platforms. A "reference" RNA is the linchpin of the design, so an important question is what to use as the reference RNA. I will propose a novel method for evaluating reference RNAs and present the results of an experiment that was designed to evaluate three common choices of reference RNA. I will also discuss advantages of reference designs, and issues on the interpretation of the microarray signal.
15:00 to 15:30 R Mukerjee (Indian Institute of Management, Calcutta)
Factorial designs for cDNA microarray experiments: results and some open issues
We consider factorial designs for cDNA microarray experiments under a baseline parametrization where the objects of interest differ from those under the more common orthogonal parametrization. Complete factorials are discussed first and some optimality results are given, including those pertaining to the saturated and nearly saturated cases. The case of models with dye-coloring effects is also covered. The technical tools include approximate theory and use of unimodular matrices. The more complex issue of fractional replication is then taken up and several open problems are indicated.
15:30 to 16:00 Tea
16:00 to 16:30 M Latif & F (QMUL)
Selection of good two-color microarray designs using genetic algorithms
Identifying differentially expressed genes is one of the main goals of microarray experiments. The use of an efficient design in microarray experiment can improve the power of the inferential procedure. Besides efficiency, robustness issues should also be considered in selecting good microarray designs because missing values often occur in microarray experiments. For a given number of available arrays and number of treatment conditions, different microarray designs can be considered. The number of possible designs could be very large and thus a complete search may not be computationally feasible. We propose a Genetic Algorithm based search procedure which considers both the efficiency and robustness criteria in selecting good microarray designs.
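A minimal version of such a genetic-algorithm search might look as follows. This is an illustrative sketch only, not the proposed procedure: each two-colour array is represented as an edge joining the two treatments hybridised on it, the fitness is a pseudo-D-criterion on the design graph's Laplacian (zero for a disconnected, and hence non-robust, design), and the GA uses truncation selection, one-point crossover and random edge mutation.

```python
import random
import numpy as np

random.seed(1)

def laplacian(edges, t):
    """Information (Laplacian) matrix of a two-colour design on t treatments."""
    L = np.zeros((t, t))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def fitness(edges, t):
    """Pseudo-D-criterion: product of the nonzero Laplacian eigenvalues,
    and 0 for a disconnected design (some contrasts inestimable)."""
    ev = np.sort(np.linalg.eigvalsh(laplacian(edges, t)))[1:]
    return float(np.prod(ev)) if ev[0] > 1e-8 else 0.0

def ga_search(t, b, pop_size=20, gens=30, mut=0.3):
    pool = [(i, j) for i in range(t) for j in range(i + 1, t)]
    pop = [[random.choice(pool) for _ in range(b)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda d: fitness(d, t), reverse=True)
        parents = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, b)
            child = p1[:cut] + p2[cut:]             # one-point crossover
            if random.random() < mut:               # mutation: swap in a random array
                child[random.randrange(b)] = random.choice(pool)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda d: fitness(d, t))

best = ga_search(t=5, b=6)
print(best, round(fitness(best, 5), 2))
```

A fuller implementation would combine the efficiency score with an explicit robustness term penalising designs that disconnect after array loss.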
16:30 to 17:00 V Lima Passos (Maastricht)
Optimal designs for one and two-colour microarrays using mixed models: a comparative evaluation of their efficiencies
Comparative studies between one- and two-colour microarrays provide supportive evidence for similarities of results on differential gene expression. So far, no design comparisons between the two platforms have been undertaken. With the objective of comparing optimal designs of one- and two-colour microarrays in terms of their statistical efficiencies, techniques of design optimisation were applied within a mixed model framework. A- and D-optimal designs for the one- and two-colour platforms were sought for a 3 x 3 factorial experiment. The results suggest that the choice of platform will not affect the allocation of subjects to groups, which is concordant between the two designs. However, under financial constraints the two-colour arrays are expected to have a slight upper hand in terms of the efficiency of model parameter estimates when arrays are more expensive than subjects. This statement is especially valid for microarray studies envisaging class comparisons.
17:00 to 17:30 H Grossmann (QMUL)
The relationship between optimal designs for microarray and paired comparison experiments
Commonly used models for the logarithm of the intensity ratio in two-color microarray experiments are equivalent to linear paired comparison models. By using this relationship it is demonstrated how optimal designs for microarray experiments involving factorial treatments can be adapted from multi-factor paired comparison experiments (Graßhoff et al., 2003, 2004). We consider models where the influence of the treatment factors is described by main effects only as well as models involving all first-order interactions.
18:45 to 19:30 Dinner at Wolfson Court (Residents only)
Wednesday 13th August 2008
09:30 to 10:00 M Vandebroek (Leuven)
Conjoint choice experiments for efficiently estimating willingness-to-pay
In a stated preference or conjoint experiment respondents evaluate a number of products that are defined by their underlying characteristics. The resulting data yield information on the importance that respondents attach to the different characteristics, also called the part-worths. In a conjoint choice experiment, respondents indicate which alternative they prefer from each choice set presented to them. The design of a conjoint choice experiment consists of choosing the appropriate alternatives and of grouping the alternatives in choice sets such that the information gathered about the part-worths is maximized. In this talk special attention will be given to the problem of assessing accurately the marginal rate of substitution by a conjoint choice experiment. The marginal rate of substitution measures the consumer's willingness to give up an attribute of a good in exchange for another attribute. As this rate of substitution is computed by taking the ratio of two part-worths, specific design problems are involved.
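The design difficulty mentioned at the end, that the rate of substitution is a ratio of two part-worths, can be made concrete with a small delta-method computation. All numbers below are hypothetical, purely to illustrate why the ratio is awkward to estimate precisely.

```python
import numpy as np

# hypothetical part-worth estimates from a conditional logit fit:
# beta[0] for a quality attribute, beta[1] for price (negative by convention)
beta = np.array([0.8, -0.4])
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])      # hypothetical covariance of the estimates

wtp = -beta[0] / beta[1]            # marginal rate of substitution (willingness-to-pay)

# delta-method standard error of the ratio g(b) = -b0/b1
grad = np.array([-1.0 / beta[1], beta[0] / beta[1] ** 2])
se = float(np.sqrt(grad @ cov @ grad))
print(round(wtp, 2), round(se, 3))  # the variance blows up as the price coefficient -> 0
```

Designs aimed at the ratio itself, rather than at the individual part-worths, target exactly this delta-method variance.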
10:00 to 10:30 B Jones (JMP)
Practical Bayesian optimal design for discrete choice experiments
The use of the world wide web for marketing has exploded recently. The web makes contacting customers and learning their preferences for potential products fast and affordable. Discrete choice experiments are a powerful tool for establishing the relative importance that the marketplace will put on the features of a new product. Since the underlying model for such experiments is nonlinear, it is necessary to provide some information about the model parameters in order to generate an efficient design. Bayesian methods provide just the formalism that is needed here. Unfortunately, the generation of optimal designs using Bayesian methods has been difficult because of the intensive computing required. This presentation describes some recent advances that make the computation of these designs feasible for practical web use. A case study will be presented for finding the preferences of users of statistical software for elements of the display diagnostic graphs.
10:30 to 11:00 A Dean (Ohio State)
Studying the level-effect in conjoint analysis: An application of efficient experimental designs for hyperparameter estimation
Research in marketing, and business in general, involves understanding when effect-sizes are expected to be large and when they are expected to be small. An example is the level-effect in marketing, where the effect of product attributes on utility is positively related to the number of levels present among choice alternatives. Knowing the contexts in which consumers are sensitive to the levels of attributes is an important aspect of merchandising, selling and promotion. In this paper, we propose efficient methods of learning about contextual factors that influence consumer preference and sensitivities within the context of a hierarchical Bayes model. A design criterion is developed for hierarchical linear models, and validated in a study of the "level-effect" in conjoint analysis using a national sample of respondents. Extensions to other model structures are discussed. This is joint work with Qing Liu, Greg Allenby and David Bakken.
11:00 to 11:30 Coffee
11:30 to 12:30 RA Bailey (QMUL)
Design of two-phase experiments
In a two-phase experiment, treatments are allocated to experimental units in the first phase, and the products from those experimental units are allocated to a second sort of experimental unit in the second phase. The appropriate data analysis (and therefore the quality of the overall design) depends on the designs used for the two phases and on how they fit together. Usually we want to estimate the most important contrasts with low variance and with a large number of degrees of freedom for the appropriate residual. In a two-phase experiment, these criteria may conflict. I will discuss some of the issues to think about when designing such experiments, and show how sometimes Patterson's design key can help.
12:30 to 13:30 Lunch at Wolfson Court
19:15 to 23:00 Pre dinner drinks and Conference Dinner - St John's College
Thursday 14th August 2008
09:30 to 10:00 K Roth (Bayer Schering Pharma AG)
Adaptive designs for dose escalation studies - a simulation study
Dose escalation studies are used to find the maximum tolerated dose of a new drug. They are among the first studies where the new drug is used in humans, so little prior knowledge about the tolerability of the drug is available. Additionally, ethical restrictions have to be considered. To account for this, adaptive approaches are adequate. Most of the current standard methods, like the 3+3 design, are not based on optimal design theory, suggesting that there is room for improvement. In a simulation study using four different dose-response scenarios, three adaptive approaches to find the maximum tolerated dose (MTD) are compared. The traditional 3+3 design is compared to a Bayesian approach using the software tool "Bayesian ADEPT". The third approach is a parametric modification of the 3+3 design, where the 3+3 design is conducted until enough information is gathered to construct locally optimal designs based on a logistic model. It is shown that the Bayesian approach performs best in determining the correct MTD, but at the cost of treating a lot of patients at toxic doses, which makes it less feasible for practical use. The 3+3 design is more conservative, tending to underestimate the MTD but treating only few patients at toxic doses. The parametric modification of the 3+3 design has higher chances of finding the correct dose while increasing the risk for the treated patients only very slightly, and therefore is a promising alternative to the traditional 3+3 design.
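For reference, the traditional 3+3 rule compared in the study above can be simulated in a few lines. This sketch uses a single hypothetical toxicity scenario, not the four scenarios of the simulation study; varying the scenario makes the rule's conservatism easy to explore.

```python
import random

random.seed(42)

def three_plus_three(tox_probs):
    """Simulate one 3+3 trial. Returns (declared MTD index, patients treated);
    MTD index -1 means even the lowest dose was judged too toxic."""
    d, n = 0, 0
    while True:
        tox = sum(random.random() < tox_probs[d] for _ in range(3)); n += 3
        if tox == 1:  # 1/3 toxicities: expand the cohort by three at the same dose
            tox += sum(random.random() < tox_probs[d] for _ in range(3)); n += 3
        if tox >= 2:                       # too toxic: MTD is the next-lower dose
            return d - 1, n
        if d + 1 == len(tox_probs):        # highest dose tolerated
            return d, n
        d += 1

# hypothetical toxicity scenario; the dose nearest a ~25% toxicity rate is index 2
scenario = [0.05, 0.10, 0.25, 0.45, 0.60]
runs = [three_plus_three(scenario) for _ in range(2000)]
prop_correct = sum(m == 2 for m, _ in runs) / len(runs)
print(round(prop_correct, 3))
```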
10:00 to 10:30 B Bogacka (QMUL)
Adaptive optimum experimental design in Phase I clinical trials
The maximum tolerable dose in Phase I clinical trials may not only carry too much unnecessary risk for patients but may also not be the most efficacious level. This may occur when the efficacy of the drug is unimodal rather than increasing, while the toxicity will be an increasing function of the dose. It may be more beneficial to design a trial so that doses around the so-called Biologically Optimum Dose (BOD) are used more than other dose levels. Zhang et al. (2006) presented simulation results for an adaptive design for a variety of models when the response is trinomial ("no response", "success" and "toxicity"). The choice of dose for the next cohort depends on the information gathered from previous cohorts, which provides an updated estimate of BOD for the next experiment. However, this reasonable approach is confined to a sparse grid of dose levels which may be far from the "true" BOD. In our work we explore the scenarios used by Zhang but search for the BOD over a continuous dose interval. This increases the percentage of patients treated with a good approximation to the "true" BOD. However, more patients may be treated at a high toxicity probability level and so some further restrictions are introduced to increase the safety of the trial. We give examples of the properties of various design strategies and suggest future developments.
10:30 to 11:00 SGM Biedermann (Southampton)
Designing experiments for an application in laser and surface chemistry
Second harmonic generation (SHG) experiments are widely used in Chemistry to investigate the behaviour of interfaces between two phases. We discuss issues arising in planning SHG experiments at the air/liquid interface in order to obtain maximal precision in the subsequent data analysis. An interesting feature of such models is that the unknown model parameters are complex. We provide designs that are optimal for estimating these parameters and discuss robustness issues arising from the non-linearity of the model.
11:00 to 11:30 Coffee
11:30 to 12:00 P Mueller (Texas at Houston)
Randomized discontinuation design
Randomized discontinuation designs (RDD) proceed in two stages. During the first stage all patients are treated with the experimental therapy. A subgroup of patients who show evidence of response during the first stage are then randomized to control and treatment in a second stage. The intention of the design is to identify in the first stage a subpopulation of patients who could potentially benefit from the treatment, and carry out the comparison in the second stage only in that identified subgroup. Most applications are to oncology phase II trials for cytostatic agents. The design is characterized by several tuning parameters: the duration of the preliminary first stage, the number of patients in the trial, and the selection criterion for the second stage. We discuss an optimal choice of the tuning parameters based on a Bayesian decision theoretic framework. We define a probability model for putative cytostatic agents and specify a suitable utility function. A computational procedure to select the optimal decision is illustrated and the efficacy of the proposed approach is evaluated through a simulation study.
12:00 to 12:30 AC Atkinson (London School of Economics)
Adaptive designs for clinical trials with prognostic factors that maximize utility
The talk concerns a typical problem in Phase III clinical trials, that is when the number of patients is large. Patients arrive sequentially and are to be allocated to one of t treatments. When the observations all have the same variance an efficient design will be balanced over treatments and over the prognostic factors with which the patients present. However, there should be some randomization in the design, which will lead to slight imbalances. Furthermore, when the responses of earlier patients are already available, there is the ethical concern of allocating more patients to the better treatments, which leads to further imbalance and to some loss of statistical efficiency. The talk will describe the use of the methods of optimum experimental design to combine balance across prognostic factors with a controllable amount of randomization. Use of a utility function provides a specified skewing of the allocation towards better treatments that depends on the ordering of the treatments. The only parameters of the design are the asymptotic proportions of patients to be allocated to the ordered treatments and the extent of randomization. The design is a sophisticated version of those for binary responses that force a prefixed allocation. Comparisons will be made with other rules that employ link functions, where the target proportions depend on the differences between treatments, rather than just on their ranking. If time permits, the extension to binary and survival-time models will be indicated. Mention will be made of the importance of regularization in avoiding trials giving extreme allocations. A simulation study fails to detect the effect of the adaptive design on inference. (Joint work with Atanu Biswas, Kolkata)
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:30 PF Thall (Texas)
Two-stage treatment strategies based on sequential failure times
For many diseases, therapy involves multiple stages, with treatment in each stage chosen adaptively based on the patient's current disease status and history of previous treatments and outcomes. Physicians routinely use such multi-stage treatment strategies, also called dynamic treatment regimes or treatment policies. In this talk, I will present a Bayesian framework for a clinical trial comparing several two-stage strategies based on the time to overall failure, defined as either second disease worsening or discontinuation of therapy. The design was motivated by a clinical trial, which is currently ongoing, comparing six two-stage strategies for treating advanced kidney cancer. Each patient is randomized among a set of treatments at enrollment, and if disease worsening occurs the patient is then re-randomized among a set of treatments excluding the treatment received initially. The goal is to select the two-stage strategy giving the largest mean overall failure time. A parametric model is formulated to account for non-constant failure time hazards, regression of the second failure time on the patient's first worsening time, and the complications that the failure time in either stage may be interval censored and there may be a delay between first and second stage of therapy. A simulation study in the context of the kidney cancer trial is presented.
14:30 to 15:00 A Giovagnoli (Bologna)
Inference and ethics in clinical trials for comparing two treatments in the presence of covariates
In the medical profession physicians are expected to act in the best interests of each patient under their care, and this attitude is also reflected in special ethical considerations surrounding clinical trials. When the trial is adaptive, possible choices are either to try to do what appears to be best at each step for that particular patient, or else to aim at an overall benefit for the entire sample of patients involved in the trial. When the aim of the trial is to compare the probabilities of success of two treatments, a typical example of the former approach is the Play-the-Winner rule, while examples of the latter are trying to minimize the total number of patients assigned to the inferior treatment, or to maximize the number of expected "successes". Clearly the need for ethics and the need for experimental evidence are often conflicting demands, so a compromise is called for. But what weight should be assigned to ethics and what to the inferential criterion? In general, it is reasonable to suppose that the more significantly different the two success probabilities are, the more important an ethical allocation will be. In this presentation we propose a compromise criterion such that the weight of ethics is an increasing function of the absolute difference between the success probabilities. We suggest an adaptive allocation method based on sequential estimation of the unknown target by maximum likelihood, and show that this particular Sequential Maximum Likelihood Design converges to a treatment allocation that optimizes the compromise criterion. The approach is extended to account for the presence of random normal covariates, allowing for treatment-covariate interactions, which makes the need for an ethical allocation even more stringent. A proof of the convergence will be given for this case too. Our design is compared to some existing ones in the literature.
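The Play-the-Winner rule cited above as an example of patient-by-patient ethics can be sketched via the classic randomized play-the-winner urn. This is the textbook urn rule, not the compromise design proposed in the talk, and the success probabilities are hypothetical.

```python
import random

random.seed(7)

def rpw_allocation(p_a, p_b, n):
    """Randomized play-the-winner urn: draw a treatment ball for each patient,
    then add a ball of the same arm after a success, or of the other arm after
    a failure. Returns the proportion of patients allocated to arm A."""
    urn = ['A', 'B']
    count_a = 0
    for _ in range(n):
        arm = random.choice(urn)
        count_a += (arm == 'A')
        success = random.random() < (p_a if arm == 'A' else p_b)
        urn.append(arm if success else ('B' if arm == 'A' else 'A'))
    return count_a / n

# hypothetical success probabilities: A clearly better, so allocation skews to A
props = [rpw_allocation(0.7, 0.3, 200) for _ in range(200)]
avg = sum(props) / len(props)
print(round(avg, 2))
```

The allocation drifts above one half towards the better arm, illustrating the ethical skewing that the compromise criterion then trades off against inferential efficiency.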
15:00 to 15:30 T Friede ([Warwick])
Flexible designs for late phase clinical trials
So-called adaptive or flexible designs are recognised as one way of making clinical development of new treatments more efficient and more robust against misspecifications of parameters in the planning phase. In this presentation we give a brief introduction to flexible designs for clinical trials and then focus on two specific adaptations, namely sample size reestimation and treatment selection. Issues in the implementation of such designs will be discussed.
15:30 to 16:00 Tea
16:00 to 16:30 S Leonov (GlaxoSmithKline)
An adaptive optimal design for the Emax model and its application in clinical trials
We discuss an adaptive design for a first-time-in-human dose-escalation study in patients. A project team working on a compound wished to maximize the efficiency of the study by using doses targeted at maximizing information about the dose-response relationship within certain safety constraints. We have developed an adaptive optimal design tool to recommend doses when the response follows an Emax model, with functionality for pre-trial simulation and in-stream analysis. We describe the methodology based on model-based optimal design techniques and present the results of simulations to investigate the operating characteristics of the applied algorithm.
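For reference, the Emax model has mean response E(d) = E0 + Emax*d/(ED50 + d). The sketch below is not GSK's tool; the parameter values and dose grid in the example are invented. It shows the core of a locally D-optimal dose-recommendation step: choose the candidate dose that most increases the determinant of the accumulated Fisher information.

```python
import numpy as np

def emax_gradient(d, e0, emax, ed50):
    """Gradient of the Emax mean E(d) = e0 + emax*d/(ed50 + d)
    with respect to the parameters (e0, emax, ed50)."""
    den = ed50 + d
    return np.array([1.0, d / den, -emax * d / den**2])

def next_dose(doses_so_far, candidates, theta):
    """Locally D-optimal sequential step (illustrative sketch): return
    the candidate dose maximising log det of the Fisher information
    accumulated over the doses used so far plus the new point."""
    e0, emax, ed50 = theta
    M = sum(np.outer(g, g) for g in
            (emax_gradient(d, e0, emax, ed50) for d in doses_so_far))
    def logdet(d):
        g = emax_gradient(d, e0, emax, ed50)
        sign, val = np.linalg.slogdet(M + np.outer(g, g))
        return val if sign > 0 else -np.inf
    return max(candidates, key=logdet)
```

In an adaptive trial, theta would be replaced at each interim by its current estimate, and the candidate set restricted by the safety constraints mentioned above.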
16:30 to 17:00 J Godolphin ([Surrey])
Selection of cross-over designs in restricted circumstances
Cross-over designs are used extensively in clinical trials and many other fields. Restrictions in the availability of subjects may result in a set of parameters for which there is no known optimal design. This situation arises, for example, if the subjects comprise patients with a rare medical condition. A new class of cyclic cross-over designs is proposed. Designs in the class are shown to have lower average variances for direct and carry-over pairwise treatment contrasts than cross-over designs previously described in the literature. Consideration is also given to guarding against choosing a design that can become disconnected (and therefore unusable) if a few observations are lost during the period of experimentation. The techniques are illustrated by selection of a design for a clinical trial with specific numbers of treatments and subjects.
17:00 to 17:30 LM Moore (Los Alamos National Laboratory)
Design and analysis of experiments applied to critical infrastructure simulation
Critical infrastructures form a complex "system of systems", and interdependent infrastructure simulation models are useful for assessing the consequences of disruptions initiated in any one infrastructure. A risk-informed decision support tool using system dynamics methods has been developed at Los Alamos National Laboratory to provide a fast-running simulation tool for gaining insight into decisions related to critical infrastructure protection in the presence of uncertainty. Modeling the consequences of an infectious disease outbreak provides a case study and an opportunity to demonstrate exploratory statistical experiment planning and analysis capability. In addition to modeling the consequences of an incident, alternative mitigation strategies can be implemented and the consequences under these alternatives compared. Statistical analyses include screening, sensitivity and uncertainty analysis, in addition to designing experiments, i.e. sets of simulation runs, for comparing the relative consequences of implementing different mitigation strategies.
18:45 to 19:30 Dinner at Wolfson Court (Residents only)
Friday 15th August 2008
09:45 to 10:00 Poster storm II INI 1
10:00 to 11:00 Poster session
11:00 to 11:30 Coffee
11:30 to 12:00 H Maruri-Aguilar & H Wynn (London School of Economics)
Optimal design for special kernel computer experiments
There are not many exact results for the optimality of experimental designs in the context of space-filling design for computer experiments. In addition there is some disparity between optimal designs for classical regression models, such as D-optimum designs, and designs for Gaussian process models, such as maximum entropy sampling (MES). In both cases one can talk about "kernels". In the first we can think of the regression models as given by kernels. In the second we have a "covariance kernel". There is a link provided by the Karhunen-Loeve expansion of the covariance function. These issues are covered, but most time is spent on important design-kernel pairs for which there are hard optimality results and the design solutions are space-filling in nature. The two main examples covered are multidimensional Fourier models, where lattice designs are D-optimal, and some recent work on Haar wavelets, where Sobol sequences are D-optimal.
12:00 to 12:30 P Challenor ([National Oceanography Centre])
Can we design for smoothing parameters?
When we analyse computer experiments we usually use an emulator (or surrogate). Emulators are based on Gaussian processes with parameters estimated from a designed experiment. Most effort in the design of computer experiments has concentrated on the idea of 'space-filling' designs, such as the Latin hypercube. However, an important parameter in the emulator is its smoothness. Intuition suggests that adding some points closer together should improve our estimates of smoothness over the standard space-filling designs. Using some ideas from geostatistics, we investigate whether we can improve our designs in this way.
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:30 P Qian ([Wisconsin-Madison])
Nested space-filling designs
Computer experiments with different levels of accuracy have become prevalent in many engineering and scientific applications. Design construction for such computer experiments is a new issue because the existing methods deal almost exclusively with computer experiments with one level of accuracy. In this talk, I will discuss the construction of some nested space-filling designs for computer experiments with different levels of accuracy. Our construction makes use of Galois fields and orthogonal arrays. As a related topic, I will also discuss the construction of suitable space-filling designs for computer experiments with qualitative and quantitative factors. This is joint work with Boxin Tang at Simon Fraser University and C. F. Jeff Wu at Georgia Tech.
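One very simple way to see what "nested" means here is the following column-wise construction, a sketch only: the talk's construction uses Galois fields and orthogonal arrays, which this toy version does not. The large design is a Latin hypercube, and its first rows collapse to a smaller Latin hypercube for the higher-accuracy (more expensive) experiment.

```python
import random

def nested_lhd(n_small, multiple, dim, seed=0):
    """Sketch of a nested Latin hypercube design. The large design has
    n_small*multiple runs and each column is a permutation of its fine
    levels; the first n_small rows, after collapsing each block of
    `multiple` fine levels into one coarse level, form a Latin
    hypercube with n_small levels."""
    rng = random.Random(seed)
    n_large = n_small * multiple
    design = [[0] * dim for _ in range(n_large)]
    for j in range(dim):
        coarse = list(range(n_small))
        rng.shuffle(coarse)            # coarse levels of the small design
        fine_small, used = [], set()
        for i in range(n_small):
            # one random fine level inside the chosen coarse block
            f = coarse[i] * multiple + rng.randrange(multiple)
            fine_small.append(f)
            used.add(f)
        rest = [f for f in range(n_large) if f not in used]
        rng.shuffle(rest)              # fill remaining fine levels
        col = fine_small + rest
        for i in range(n_large):
            design[i][j] = col[i]
    return design
```

Running the high-accuracy code on the first block of runs and the low-accuracy code on all runs then gives space-filling designs at both levels.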
14:30 to 15:00 A Kumar (Ohio State University)
Sequential calibration of computer models
We propose a sequential method for the estimation of calibration parameters for computer models. The goal is to find the values of the calibration parameters that bring a computer simulation into "best" agreement with data from a physical experiment. In this method, we first fit separate Gaussian Stochastic Process (GASP) models to given data from a physical and a computer experiment. The values of the calibration parameters that minimize the discrepancy between predictions from the two models are taken as the estimates. In the second step, the point with maximum potential for reducing the uncertainty in the fitted model is identified. The computer experiment is conducted at this new point. The first step is then repeated with the augmented data set, the calibration parameters are re-estimated, and the next design point is determined. The procedure is repeated until the allocated budget for the number of design points is exhausted or the estimates of the calibration parameters are satisfactory. Empirical results show the effectiveness of the sequential procedure in achieving faster convergence to the estimates of the calibration parameters when a unique best estimate exists.
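The loop described above can be sketched as follows. For brevity this toy version compares the emulator directly with noiseless field data rather than fitting a second GASP model, the kernel settings and example function are arbitrary choices, and all names are hypothetical.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

class GP:
    """Minimal zero-mean GP interpolator (a stand-in for a GASP model)."""
    def __init__(self, X, y, noise=1e-6):
        self.X = np.atleast_2d(X)
        K = rbf(self.X, self.X) + noise * np.eye(len(y))
        self.alpha = np.linalg.solve(K, np.asarray(y, float))
        self.Kinv = np.linalg.inv(K)
    def mean(self, Xs):
        return rbf(np.atleast_2d(Xs), self.X) @ self.alpha
    def var(self, Xs):
        Ks = rbf(np.atleast_2d(Xs), self.X)
        return 1.0 - np.einsum('ij,jk,ik->i', Ks, self.Kinv, Ks)

def calibrate(sim, x_field, y_field, X_sim, t_grid, n_steps=3):
    """Sequential sketch: estimate the calibration parameter t by
    minimising the discrepancy between field data and the emulator,
    then add the candidate simulator run (x, t) with the largest
    predictive variance, and repeat."""
    X_sim = [list(x) for x in X_sim]
    y_sim = [sim(*x) for x in X_sim]
    for _ in range(n_steps):
        gp = GP(np.array(X_sim), y_sim)
        def disc(t):  # discrepancy at candidate calibration value t
            pred = gp.mean(np.array([[x, t] for x in x_field]))
            return ((pred - y_field) ** 2).sum()
        t_hat = min(t_grid, key=disc)
        cand = np.array([[x, t] for x in x_field for t in t_grid])
        best = cand[int(np.argmax(gp.var(cand)))]  # most uncertain point
        X_sim.append(list(best))
        y_sim.append(sim(*best))  # run the computer experiment there
    return t_hat
```

With a toy simulator whose field data were generated at a known parameter value, the grid search recovers that value once the emulator is accurate enough near the field inputs.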
15:00 to 15:30 D Romano ([Cagliari])
A sequential methodology for integrating physical and computer experiments
In advanced industrial sectors, like aerospace, automotive, microelectronics and telecommunications, intensive use of simulation and lab trials is already a daily practice in R&D activities. In spite of this, there is still no comprehensive approach in the applied statistical literature for integrating physical and simulation experiments. Computer experiments, an autonomous discipline since the end of the eighties (Sacks et al., 1989; Santner et al., 2003), provide a limited view of what a "computer experiment" can be in an industrial setting (the computer program is considered expensive to run and its output strictly deterministic) and have practically ignored the "integration" problem. Existing contributions mainly address the problem of calibrating the computer model on the basis of field data. Kennedy and O'Hagan (2001) and Bayarri et al. (2007) introduced a fully Bayesian approach that also models the bias between the computer model and the physical data, thus also addressing model validation, i.e. assessing how well the model represents reality. Nevertheless, in this body of research the role of physical observations is ancillary: they are generally few and not subject to design.

In the fifties, Box and Wilson (1951) provided a framework, which they called sequential experimentation, for improving industrial systems by physical experiments. Knowledge of the system is built incrementally by organising the investigation as a sequence of related experiments with varying scope (screening, prediction, and optimisation).

A first attempt to introduce such a systemic view in the context of integrated physical and computer experiments is presented in the paper. We envisage a sequential approach where both physical and computer experiments are used in a synergistic way with the goals of improving a real system of interest and validating/improving the computer model. The whole process stops when a satisfactory level of improvement is realised.

It is important to point out that the two sources of information have a distinct role as they produce information with different degrees of cost (speed) and reliability. In a typical situation where the simulator is cheaper (faster) and the physical set-up is more reliable, it is sensible to use simulation experiments for exploring the space of the design variables in depth in order to get innovative findings, and to use a moderate amount of the costly physical trials for the verification of the findings. If findings obtained by simulation are not confirmed in the field, the computer code should be revised accordingly.

Different decision levels are handled within the framework. High-level decisions are whether to stop or continue, whether to conduct the next experiment on the physical system or on its simulator, and what the purpose of the experiment is (exploration, improvement, confirmation, model validation). Intermediate-level decisions are the location of the experimental region and the run size.

15:30 to 16:00 Tea
16:00 to 16:30 B Torsney ([Glasgow])
Multiplicative Algorithms: A class of algorithmic methods used in optimal experimental design
Multiplicative algorithms have been considered by several authors. Titterington (1976) proved monotonicity for D-optimality for a specific choice. This latter choice is also monotonic for finding the maximum likelihood estimators of the mixing weights, given data from a mixture of distributions; indeed it is an EM algorithm, see Torsney (1977). Torsney (1983) proved monotonicity for A-optimality; in fact this extended a result of Fellman (1974) for c-optimality, though he was not focussing on algorithms. Both choices also appear to be monotonic in determining c-optimal and D-optimal conditional designs respectively, i.e. in determining several optimising distributions; see Martin-Martin, Torsney and Fidalgo (2007). Other choices are needed if the criterion function can have negative derivatives, as in some maximum likelihood estimation problems, or if partial derivatives are replaced by vertex directional derivatives; see Torsney (1988), Torsney and Alahmadi (1992) and Torsney and Mandal (2004, 2006).

We study a new approach to determining optimal designs, exact or approximate, both for correlated responses and for the uncorrelated case. A simple version of this method, in the case of one design variable x, is based on transforming a conceived set of design points {xi} on a finite interval to the proportions of the design interval defined by the sub-intervals between successive points. Methods for determining optimal design weights can therefore be used to determine optimal values of these proportions. We explore the potential of this method in a variety of examples encompassing both linear and nonlinear models (some assuming a correlation structure), and a range of criteria including D-, L- and c-optimality. It is also planned to extend this work as follows:

1. An extension is to first transform x to F(x), where F(.) is a distribution function, and then to transform a set of design points to the proportions naturally defined by the differences in the F(.) values of successive design points. This has the advantage of accommodating unbounded design intervals, as occur in nonlinear models, and is a natural choice in binary regression models.

2. A major problem in optimum experimental design theory concerns discrimination between several plausible models. We believe that using this approach we can obtain T-optimum designs under some differentiability conditions.

3. We also consider examples with more than one design variable. In this case we transform the design problem to one of optimizing with respect to several distributions.
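For the D-optimality case, the multiplicative update has a particularly simple form: w_i <- w_i * d_i(w)/m, where d_i(w) = x_i' M(w)^{-1} x_i and m is the number of model parameters. The sketch below (numpy; the quadratic-regression example, grid and iteration count are illustrative choices, not from the talk) applies it to a problem with a known answer: for quadratic regression on [-1, 1], the D-optimal design puts weight 1/3 at each of -1, 0, 1.

```python
import numpy as np

def multiplicative_d_optimal(X, n_iter=1000):
    """Multiplicative algorithm for approximate D-optimal design
    weights (monotonic for D-optimality).  Each row of X is the
    regression vector of one candidate design point.  Note that
    sum_i w_i d_i(w) = trace(I_m) = m, so the weights stay a
    probability vector after every update."""
    n, m = X.shape
    w = np.full(n, 1.0 / n)              # start from the uniform design
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)       # information matrix M(w)
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)
        w *= d / m                       # multiplicative update
    return w

# Quadratic regression f(t) = (1, t, t^2) on a 21-point grid in [-1, 1].
t = np.linspace(-1, 1, 21)
X = np.column_stack([np.ones_like(t), t, t**2])
w = multiplicative_d_optimal(X)
```

Non-support points decay geometrically because their variance function d_i stays below m at the optimum, which is the mechanism behind the monotonicity results cited above.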
16:30 to 17:00 D Ucinski ([Zielona Gora])
On construction of constrained optimum designs
A simple computational algorithm is proposed for maximization of a concave function over the set of all convex combinations of a finite number of nonnegative definite matrices subject to additional box constraints on the weights of those combinations. Such problems commonly arise when optimum experimental designs are sought over a design region consisting of finitely many support points, subject to the additional constraints that the corresponding design weights are to remain within certain limits. The underlying idea is to apply a simplicial decomposition algorithm in which the restricted master problem reduces to an uncomplicated weight optimization one. Global convergence to the optimal solution is established and the use of the algorithm is illustrated by examples involving D-optimal design of measurement effort for parameter estimation of a multiresponse chemical kinetics process, as well as sensor selection in a large-scale monitoring network for parameter estimation of a process described by a two-dimensional diffusion equation. Parallelization of the procedure and extensions to general continuous designs are also discussed.
17:00 to 17:30 JP Morgan ([Virginia Tech])
Efficiency, optimality, and differential treatment interest
Standard optimality arguments for designed experiments rest on the assumption that all treatments are of equal interest. One exception is found in the "test treatment versus control" literature, where the control is allocated special status. Optimality work there has focused on all pairwise comparisons with the control, making no explicit account of how well test treatments are compared to one another. In many applications it would be preferable to choose a design according to the relative importance placed on contrasts involving the control compared with those involving test treatments only. This is an example of how a weighted optimality approach can better reflect experimenter goals. When evaluating designs for comparing $v$ treatments, weights $w_1,\ldots,w_v$ ($\sum_iw_i=1$) can be assigned to account for differential treatment interest. These weights enter the evaluation through optimality measures, leading to, for example, weighted versions of the popular A, E, and MV measures of design efficacy. Families of weighted-optimal designs have been identified for both blocked and unblocked experiments. The theory for weighted optimality leads quite naturally to the notion of weight-balanced designs. Weighted balance and partial balance incorporate the concepts of efficiency balance and its generalizations that have been built on the foundation laid by Jones (1959, JRSS-B 21, 172-179). These balance ideas are closely tied to the weighted E criterion.
18:45 to 19:30 Dinner at Wolfson Court (Residents only)