09:30 to 10:00  Adaptive designs for dose escalation studies - a simulation study. Session: Design for nonlinear models.
Dose escalation studies are used to find the maximum tolerated dose of a new drug. They are among the first studies in which the new drug is used in humans, so little prior knowledge about its tolerability is available. In addition, ethical restrictions have to be considered. Adaptive approaches are well suited to this setting. Most current standard methods, such as the 3+3 design, are not based on optimal design theory, suggesting that there is room for improvement. In a simulation study using four different dose-response scenarios, three adaptive approaches to finding the maximum tolerated dose (MTD) are compared. The traditional 3+3 design is compared to a Bayesian approach using the software tool "Bayesian ADEPT". The third approach is a parametric modification of the 3+3 design, in which the 3+3 design is conducted until enough information is gathered to construct locally optimal designs based on a logistic model. It is shown that the Bayesian approach performs best in determining the correct MTD, but at the cost of treating many patients at toxic doses, which makes it less feasible for practical use. The 3+3 design is more conservative, tending to underestimate the MTD but treating only a few patients at toxic doses. The parametric modification of the 3+3 design has a higher chance of finding the correct dose while increasing the risk to treated patients only very slightly, and is therefore a promising alternative to the traditional 3+3 design. INI 1

10:00 to 10:30  B Bogacka (QMUL): Adaptive optimum experimental design in Phase I clinical trials. Session: Design for nonlinear models.
The maximum tolerable dose in Phase I clinical trials may not only carry too much unnecessary risk for patients but may also not be the most efficacious level. This may occur when the efficacy of the drug is unimodal rather than increasing, while toxicity is an increasing function of the dose. It may be more beneficial to design a trial so that doses around the so-called Biologically Optimum Dose (BOD) are used more than other dose levels. Zhang et al (2006) presented simulation results for an adaptive design for a variety of models when the response is trinomial ("no response", "success" and "toxicity"). The choice of dose for the next cohort depends on the information gathered from previous cohorts, which provides an updated estimate of the BOD for the next experiment. However, this reasonable approach is confined to a sparse grid of dose levels which may be far from the "true" BOD. In our work we explore the scenarios used by Zhang et al but search for the BOD over a continuous dose interval. This increases the percentage of patients treated with a good approximation to the "true" BOD. However, more patients may be treated at a high toxicity probability level, and so some further restrictions are introduced to increase the safety of the trial. We give examples of the properties of various design strategies and suggest future developments. INI 1
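As a toy illustration of the search over a continuous dose interval described in the abstract above, the sketch below locates a biologically optimum dose under a toxicity cap. The efficacy and toxicity curves and the cap are hypothetical stand-ins, not the trinomial models of Zhang et al.

```python
# Illustrative sketch: locate a Biologically Optimum Dose (BOD) on a continuous
# dose interval, assuming a unimodal "success" curve and an increasing toxicity
# curve. The curves and the toxicity cap are hypothetical, not from the talk.
import numpy as np

def p_success(dose):
    # unimodal efficacy curve peaking at dose 0.75
    return 0.8 * np.exp(-((dose - 0.75) ** 2) / 0.05)

def p_toxicity(dose):
    # monotonically increasing toxicity (logistic in dose)
    return 1.0 / (1.0 + np.exp(-10 * (dose - 0.7)))

def find_bod(max_tox=0.3, grid_size=10_001):
    doses = np.linspace(0.0, 1.0, grid_size)      # continuous interval, fine grid
    admissible = p_toxicity(doses) <= max_tox     # safety restriction
    utilities = np.where(admissible, p_success(doses), -np.inf)
    return doses[np.argmax(utilities)]

if __name__ == "__main__":
    # the safety cap binds here, pulling the chosen dose below the efficacy peak
    print(f"estimated BOD: {find_bod():.3f}")
```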
10:30 to 11:00  Designing experiments for an application in laser and surface chemistry. Session: Design for nonlinear models.
Second harmonic generation (SHG) experiments are widely used in chemistry to investigate the behaviour of interfaces between two phases. We discuss issues arising in planning SHG experiments at the air/liquid interface in order to obtain maximal precision in the subsequent data analysis. An interesting feature of such models is that the unknown model parameters are complex-valued. We provide designs that are optimal for estimating these parameters and discuss robustness issues arising from the non-linearity of the model. INI 1

11:00 to 11:30  Coffee

11:30 to 12:00  P Mueller (Texas at Houston): Randomized discontinuation design. Session: Clinical Trials - Theme opening session.
Randomized discontinuation designs (RDD) proceed in two stages. During the first stage all patients are treated with the experimental therapy. A subgroup of patients who show evidence of response during the first stage is then randomized to control and treatment in a second stage. The intention of the design is to identify in the first stage a subpopulation of patients who could potentially benefit from the treatment, and to carry out the comparison in the second stage only in that identified subgroup. Most applications are to oncology phase II trials for cytostatic agents. The design is characterized by several tuning parameters: the duration of the preliminary first stage, the number of patients in the trial, and the selection criterion for the second stage. We discuss an optimal choice of the tuning parameters based on a Bayesian decision-theoretic framework. We define a probability model for putative cytostatic agents and specify a suitable utility function. A computational procedure to select the optimal decision is illustrated and the efficacy of the proposed approach is evaluated through a simulation study. INI 1

12:00 to 12:30  Adaptive designs for clinical trials with prognostic factors that maximize utility. Session: Clinical Trials - Theme opening session.
The talk concerns a typical problem in Phase III clinical trials, that is, when the number of patients is large. Patients arrive sequentially and are to be allocated to one of $t$ treatments. When the observations all have the same variance, an efficient design will be balanced over treatments and over the prognostic factors with which the patients present. However, there should be some randomization in the design, which will lead to slight imbalances. Furthermore, when the responses of earlier patients are already available, there is the ethical concern of allocating more patients to the better treatments, which leads to further imbalance and to some loss of statistical efficiency. The talk will describe the use of the methods of optimum experimental design to combine balance across prognostic factors with a controllable amount of randomization. Use of a utility function provides a specified skewing of the allocation towards better treatments that depends on the ordering of the treatments. The only parameters of the design are the asymptotic proportions of patients to be allocated to the ordered treatments and the extent of randomization. The design is a sophisticated version of those for binary responses that force a prefixed allocation. Comparisons will be made with other rules that employ link functions, where the target proportions depend on the differences between treatments rather than just on their ranking. If time permits, the extension to binary and survival-time models will be indicated. Mention will be made of the importance of regularization in avoiding trials giving extreme allocations. A simulation study fails to detect any effect of the adaptive design on inference. (Joint work with Atanu Biswas, Kolkata) INI 1
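A deliberately simplified sketch of the kind of rank-based skewing described in the preceding abstract: target proportions are attached to the ordered treatments and tempered by a randomization parameter. This is an illustrative stand-in, not the authors' rule, and it ignores the balancing over prognostic factors; all numbers are hypothetical.

```python
# Illustrative sketch (not the rule described in the talk): skew allocation
# towards better-ranked treatments using pre-specified target proportions for
# the ordered treatments, tempered by a randomization parameter.
import numpy as np

rng = np.random.default_rng(1)

def allocation_probs(effect_estimates, target_props, randomization=0.3):
    """target_props[0] is the share intended for the currently best treatment,
    target_props[1] for the second best, and so on; they must sum to 1.
    randomization in [0, 1] mixes the target with equal allocation."""
    t = len(effect_estimates)
    order = np.argsort(effect_estimates)[::-1]          # best treatment first
    probs = np.empty(t)
    probs[order] = target_props                         # depends on ranking only
    return (1 - randomization) * probs + randomization * np.full(t, 1 / t)

# hypothetical interim estimates for t = 3 treatments
probs = allocation_probs([0.42, 0.55, 0.31], target_props=[0.5, 0.3, 0.2])
next_treatment = rng.choice(3, p=probs)
print(probs, next_treatment)
```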
12:30 to 13:30  Lunch at Wolfson Court

14:00 to 14:30  PF Thall (Texas): Two-stage treatment strategies based on sequential failure times. Session: Theme session 2.
For many diseases, therapy involves multiple stages, with treatment in each stage chosen adaptively based on the patient's current disease status and history of previous treatments and outcomes. Physicians routinely use such multi-stage treatment strategies, also called dynamic treatment regimes or treatment policies. In this talk, I will present a Bayesian framework for a clinical trial comparing several two-stage strategies based on the time to overall failure, defined as either second disease worsening or discontinuation of therapy. The design was motivated by a clinical trial, currently ongoing, comparing six two-stage strategies for treating advanced kidney cancer. Each patient is randomized among a set of treatments at enrollment, and if disease worsening occurs the patient is then re-randomized among a set of treatments excluding the treatment received initially. The goal is to select the two-stage strategy giving the largest mean overall failure time. A parametric model is formulated to account for non-constant failure time hazards, regression of the second failure time on the patient's first worsening time, and the complications that the failure time in either stage may be interval censored and that there may be a delay between the first and second stages of therapy. A simulation study in the context of the kidney cancer trial is presented. INI 1

14:30 to 15:00  A Giovagnoli (Bologna): Inference and ethics in clinical trials for comparing two treatments in the presence of covariates. Session: Theme session 2.
In the medical profession physicians are expected to act in the best interests of each patient under their care, and this attitude is also reflected in special ethical considerations surrounding clinical trials. When the trial is adaptive, possible choices are either to try to do what appears to be best at each step for that particular patient, or else to aim at an overall benefit for the entire sample of patients involved in the trial. When the aim of the trial is to compare the probabilities of success of two treatments, a typical example of the former approach is the Play-the-Winner rule, while examples of the latter are minimizing the total number of patients assigned to the inferior treatment or maximizing the expected number of "successes". Clearly the need for ethics and the need for experimental evidence are often conflicting demands, so a compromise is called for. But what weight should be assigned to ethics and what to the inferential criterion? In general, it is reasonable to suppose that the more significantly different the two success probabilities are, the more important an ethical allocation will be. In this presentation we propose a compromise criterion such that the weight of ethics is an increasing function of the absolute difference between the success probabilities. We suggest an adaptive allocation method based on sequential estimation of the unknown target by maximum likelihood, and show that this particular Sequential Maximum Likelihood Design converges to a treatment allocation that optimizes the compromise criterion. The approach is extended to account for the presence of random normal covariates, allowing for treatment-covariate interactions, which makes the need for an ethical allocation even more stringent. A proof of the convergence will be given for this case too. Our design is compared with some existing ones in the literature. INI 1
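The flavour of such a compromise criterion can be shown with a toy calculation. In the sketch below the ethical weight is taken, purely for illustration, to be the absolute difference between the success probabilities, and it trades off expected failures against the variance of the estimated treatment difference; these component criteria are hypothetical choices, not the authors' exact formulation. In the sequential maximum likelihood design the probabilities would be replaced at each step by their current estimates.

```python
# Illustrative compromise target allocation: the ethical weight grows with the
# absolute difference between the success probabilities, and the target
# proportion rho on treatment A minimizes a weighted sum of expected failures
# and the (scaled) variance of the estimated difference. All choices are
# hypothetical stand-ins for the criterion proposed in the talk.
import numpy as np

def compromise_target(p_a, p_b, n=100, grid=np.linspace(0.01, 0.99, 981)):
    w = abs(p_a - p_b)                                     # ethical weight in [0, 1]
    failures = grid * (1 - p_a) + (1 - grid) * (1 - p_b)   # expected failure rate
    variance = p_a * (1 - p_a) / (n * grid) + p_b * (1 - p_b) / (n * (1 - grid))
    criterion = w * failures + (1 - w) * n * variance      # scale variance by n
    return grid[np.argmin(criterion)]

# the further apart the success probabilities, the more the target is skewed
print(compromise_target(0.55, 0.50), compromise_target(0.80, 0.30))
```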
15:00 to 15:30  T Friede (Warwick): Flexible designs for late phase clinical trials. Session: Theme session 2.
So-called adaptive or flexible designs are recognised as one way of making clinical development of new treatments more efficient and more robust against misspecifications of parameters in the planning phase. In this presentation we give a brief introduction to flexible designs for clinical trials and then focus on two specific adaptations, namely sample size reestimation and treatment selection. Issues in the implementation of such designs will be discussed. INI 1

15:30 to 16:00  Tea

16:00 to 16:30  S Leonov (GlaxoSmithKline): An adaptive optimal design for the Emax model and its application in clinical trials. Session: Theme session 3.
We discuss an adaptive design for a first-time-in-human dose-escalation study in patients. A project team working on a compound wished to maximize the efficiency of the study by using doses targeted at maximizing information about the dose-response relationship within certain safety constraints. We have developed an adaptive optimal design tool to recommend doses when the response follows an Emax model, with functionality for pre-trial simulation and in-stream analysis. We describe the methodology, based on model-based optimal design techniques, and present the results of simulations to investigate the operating characteristics of the applied algorithm. INI 1
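For readers unfamiliar with the Emax model mentioned above, the mean response at dose d is E0 + Emax * d / (ED50 + d). The sketch below evaluates the log-determinant of the information matrix for this model and, purely as an illustration, searches over equal-weight three-point designs of the form {0, d*, d_max} at a hypothetical parameter guess; it is not the adaptive tool described in the talk.

```python
# Minimal sketch: locally D-optimal dosing for a three-parameter Emax model,
# E[y | d] = E0 + Emax * d / (ED50 + d), restricting attention (for simplicity)
# to equal-weight three-point designs {0, d*, d_max}. Parameter guesses and the
# dose range are hypothetical placeholders.
import numpy as np

def emax_gradient(d, e0, emax, ed50):
    """Gradient of the mean response with respect to (E0, Emax, ED50)."""
    return np.array([1.0, d / (ed50 + d), -emax * d / (ed50 + d) ** 2])

def log_det_info(doses, weights, theta):
    f = np.array([emax_gradient(d, *theta) for d in doses])
    m = (f.T * weights) @ f                      # M(xi) = sum_i w_i f_i f_i^T
    return np.linalg.slogdet(m)[1]

theta0 = (1.0, 5.0, 20.0)                        # prior guess (E0, Emax, ED50)
d_max = 100.0
candidates = np.linspace(1.0, d_max - 1.0, 500)
best = max(candidates,
           key=lambda d: log_det_info([0.0, d, d_max], np.full(3, 1 / 3), theta0))
print(f"suggested interior dose: {best:.1f}")    # driven mainly by the ED50 guess
```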
16:30 to 17:00  J Godolphin (Surrey): Selection of cross-over designs in restricted circumstances. Session: Theme session 3.
Cross-over designs are used extensively in clinical trials and many other fields. Restrictions in the availability of subjects may result in a set of parameters for which there is no known optimal design. This situation arises, for example, if the subjects comprise patients with a rare medical condition. A new class of cyclic cross-over designs is proposed. Designs in the class are shown to have lower average variances for direct and carry-over pairwise treatment contrasts than cross-over designs previously described in the literature. Consideration is also given to guarding against choosing a design that can become disconnected (and therefore unusable) if a few observations are lost during the period of experimentation. The techniques are illustrated by selection of a design for a clinical trial with specific numbers of treatments and subjects. INI 1

17:00 to 17:30  Design and analysis of experiments applied to critical infrastructure simulation. Session: Theme session 3.
Critical infrastructures are a complex "system of systems", and interdependent infrastructure simulation models are useful for assessing the consequences of disruptions initiated in any infrastructure. A risk-informed decision support tool using systems dynamics methods has been developed at Los Alamos National Laboratory to provide an efficiently running simulation tool that gives insight for making decisions related to critical infrastructure protection in the presence of uncertainty. Modeling the consequences of an infectious disease outbreak provides a case study and an opportunity to demonstrate exploratory statistical experiment planning and analysis capability. In addition to modeling the consequences of an incident, alternative mitigation strategies can be implemented and the consequences under these alternatives compared. Statistical analyses include screening, sensitivity and uncertainty analysis, in addition to designing experiments (sets of simulation runs) for comparing the relative consequences of implementing different mitigation strategies. INI 1

18:45 to 19:30  Dinner at Wolfson Court (Residents only)
09:45 to 10:00  Poster storm II INI 1

10:00 to 11:00  Poster session

11:00 to 11:30  Coffee

11:30 to 12:00  Optimal design for special kernel computer experiments. Session: Computer experiments - Theme opening session.
There are not many exact results for the optimality of experimental designs in the context of space-filling designs for computer experiments. In addition there is some disparity between optimal designs for classical regression models, such as D-optimum designs, and designs for Gaussian process models, such as maximum entropy sampling (MES). In both cases one can talk about "kernels". In the first we can think of the regression models as given by kernels. In the second we have a "covariance kernel". There is a link provided by the Karhunen-Loève expansion of the covariance function. These issues are covered, but most time is spent on important design-kernel pairs where there are hard optimality results and the design solutions are space-filling in nature. The two main examples covered are multidimensional Fourier models, where lattice designs are D-optimal, and some recent work on Haar wavelets, where Sobol sequences are D-optimal. INI 1

12:00 to 12:30  Can we design for smoothing parameters? Session: Computer experiments - Theme opening session.
When we analyse computer experiments we usually use an emulator (or surrogate). Emulators are based on Gaussian processes with parameters estimated from a designed experiment. Most effort in the design of computer experiments has concentrated on the idea of 'space-filling' designs, such as the Latin hypercube. However, an important parameter in the emulator is its smoothness. Intuition suggests that adding some points closer together should improve our estimates of smoothness over the standard space-filling designs. Using some ideas from geostatistics, we investigate whether we can improve our designs in this way. INI 1
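A minimal sketch of the intuition in the preceding abstract: augment a space-filling Latin hypercube with a few close pairs so that short inter-point distances are available for estimating the emulator's smoothness (correlation-length) parameters. The sample sizes and jitter scale are arbitrary illustrative choices.

```python
# Minimal sketch: start from a space-filling Latin hypercube and add a few
# jittered near-duplicates of existing points, so that some small inter-point
# distances are available for estimating the smoothness parameters of a
# Gaussian-process emulator. Sizes and jitter scale are illustrative.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
dim, n_base, n_pairs, jitter = 2, 20, 4, 0.02

base = qmc.LatinHypercube(d=dim, seed=0).random(n_base)       # space-filling part
chosen = rng.choice(n_base, size=n_pairs, replace=False)       # points to duplicate
close = np.clip(base[chosen] + rng.normal(0, jitter, (n_pairs, dim)), 0, 1)
design = np.vstack([base, close])                               # augmented design
print(design.shape)   # (24, 2): 20 space-filling points plus 4 close companions
```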
12:30 to 13:30  Lunch at Wolfson Court

14:00 to 14:30  Nested space-filling designs. Session: Theme session 2.
Computer experiments with different levels of accuracy have become prevalent in many engineering and scientific applications. Design construction for such computer experiments is a new issue because the existing methods deal almost exclusively with computer experiments with one level of accuracy. In this talk, I will discuss the construction of some nested space-filling designs for computer experiments with different levels of accuracy. Our construction makes use of Galois fields and orthogonal arrays. As a related topic, I will also discuss the construction of suitable space-filling designs for computer experiments with qualitative and quantitative factors. This is joint work with Boxin Tang at Simon Fraser University and C. F. Jeff Wu at Georgia Tech. INI 1

14:30 to 15:00  Sequential calibration of computer models. Session: Theme session 2.
We propose a sequential method for the estimation of calibration parameters for computer models. The goal is to find the values of the calibration parameters that bring a computer simulation into "best" agreement with data from a physical experiment. In this method, we first fit separate Gaussian Stochastic Process (GASP) models to given data from a physical and a computer experiment. The values of the calibration parameters that minimize the discrepancy between predictions from the two models are taken as the estimates. In the second step, the point with maximum potential for reducing the uncertainty in the fitted model is identified. The computer experiment is conducted at this new point. The first step is repeated with the augmented data set, the calibration parameters re-estimated, and the next design point determined. The method is repeated until the allocated budget for the number of design points is exhausted or the estimates of the calibration parameters are satisfactory. Empirical results show the effectiveness of the sequential procedure in achieving faster convergence to the estimates of the calibration parameters when a unique best estimate exists. INI 1
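A structural sketch of the sequential calibration loop just described, using off-the-shelf Gaussian-process regressors from scikit-learn as stand-ins for the GASP models. The toy "physical" and "computer" functions, the grids, the uncertainty-reduction surrogate (here simply the point of maximum predictive standard deviation) and the budget are all hypothetical assumptions.

```python
# Structural sketch of the sequential calibration loop (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
theta_true = 0.4

def physical(x):                         # field process, observed with noise
    return np.sin(6 * x) + theta_true * x + rng.normal(0, 0.05, np.shape(x))

def computer(x, theta):                  # deterministic simulator
    return np.sin(6 * x) + theta * x

# initial data: a few field observations and a few simulator runs
x_field = np.linspace(0, 1, 8)
y_field = physical(x_field)
xt_sim = rng.uniform(0, 1, (10, 2))      # simulator inputs: columns (x, theta)
y_sim = computer(xt_sim[:, 0], xt_sim[:, 1])

gp_field = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2, normalize_y=True)
gp_field.fit(x_field.reshape(-1, 1), y_field)

theta_grid = np.linspace(0, 1, 101)
candidates = np.array([(x, t) for x in np.linspace(0, 1, 21) for t in theta_grid])

for _ in range(5):                                        # budget of 5 new runs
    gp_sim = GaussianProcessRegressor(kernel=RBF([0.2, 0.2]), alpha=1e-6,
                                      normalize_y=True).fit(xt_sim, y_sim)
    # step 1: calibration estimate = theta minimizing field/simulator discrepancy
    field_pred = gp_field.predict(x_field.reshape(-1, 1))
    def discrepancy(t):
        sim_pred = gp_sim.predict(np.column_stack([x_field, np.full_like(x_field, t)]))
        return np.sum((field_pred - sim_pred) ** 2)
    theta_hat = min(theta_grid, key=discrepancy)
    # step 2: next simulator run where the emulator is most uncertain
    _, sd = gp_sim.predict(candidates, return_std=True)
    new_point = candidates[np.argmax(sd)]
    xt_sim = np.vstack([xt_sim, new_point])
    y_sim = np.append(y_sim, computer(*new_point))
    print(f"theta_hat = {theta_hat:.2f}, new run at {np.round(new_point, 2)}")
```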
15:00 to 15:30  A sequential methodology for integrating physical and computer experiments. Session: Theme session 2.
In advanced industrial sectors, like aerospace, automotive, microelectronics and telecommunications, intensive use of simulation and lab trials is already a daily practice in R&D activities. In spite of this, there is still no comprehensive approach for integrating physical and simulation experiments in the applied statistical literature. Computer experiments, an autonomous discipline since the end of the eighties (Sacks et al., 1989; Santner et al., 2003), provides a limited view of what a "computer experiment" can be in an industrial setting (the computer program is considered expensive to run and its output strictly deterministic) and has practically ignored the "integration" problem. Existing contributions mainly address the problem of calibrating the computer model based on field data. Kennedy and O'Hagan (2001) and Bayarri et al. (2007) introduced a fully Bayesian approach for also modeling the bias between the computer model and the physical data, thus also addressing model validation, i.e. assessing how well the model represents reality. Nevertheless, in this body of research the role of physical observations is ancillary: they are generally few and not subject to design. In the fifties, Box and Wilson (1951) provided a framework, which they called sequential experimentation, for improving industrial systems by physical experiments. Knowledge of the system is built incrementally by organising the investigation as a sequence of related experiments with varying scope (screening, prediction, and optimisation). A first attempt to introduce such a systemic view in the context of integrated physical and computer experiments is presented in the paper. We envisage a sequential approach where both physical and computer experiments are used in a synergistic way with the goals of improving a real system of interest and validating/improving the computer model. The whole process stops when a satisfactory level of improvement is realised. It is important to point out that the two sources of information have distinct roles, as they produce information with different degrees of cost (speed) and reliability. In a typical situation where the simulator is cheaper (faster) and the physical set-up is more reliable, it is sensible to use simulation experiments for exploring the space of the design variables in depth in order to get innovative findings, and to use a moderate amount of the costly physical trials for the verification of the findings. If findings obtained by simulation are not confirmed in the field, the computer code should be revised accordingly. Different decision levels are handled within the framework. High-level decisions are whether to stop or continue, whether to conduct the next experiment on the physical system or on its simulator, and what the purpose of the experiment is (exploration, improvement, confirmation, model validation). Intermediate-level decisions are the location of the experimental region and the run size. INI 1

15:30 to 16:00  Tea

16:00 to 16:30  Multiplicative Algorithms: A class of algorithmic methods used in optimal experimental design. Session: Design construction and optimality.
Multiplicative algorithms have been considered by several authors. Thus Titterington (1976) proved monotonicity for D-optimality for a specific choice. This latter choice is also monotonic for finding the maximum likelihood estimators of the mixing weights, given data from a mixture of distributions. Indeed it is an EM algorithm; see Torsney (1977). Torsney (1983) proved monotonicity for A-optimality. In fact this extended a result of Fellman (1974) for c-optimality, but he was not focussing on algorithms. Both choices also appear to be monotonic in determining respectively c-optimal and D-optimal conditional designs, i.e. in determining several optimising distributions; see Martin-Martin, Torsney and Fidalgo (2007). Other choices are needed if the criterion function can have negative derivatives, as in some maximum likelihood estimation problems, or if partial derivatives are replaced by vertex directional derivatives. See Torsney (1988), Torsney and Alahmadi (1992) and Torsney and Mandal (2004, 2006). We study a new approach to determining optimal designs, exact or approximate, both for correlated responses and for the uncorrelated case. A simple version of this method, in the case of one design variable $x$, is based on transforming a conceived set of design points $\{x_i\}$ on a finite interval to the proportions of the design interval defined by the sub-intervals between successive points. Methods for determining optimal (design) weights can therefore be used to determine optimal values of these proportions. We explore the potential of this method in a variety of examples encompassing both linear and nonlinear models (some assuming a correlation structure), and a range of criteria including D-, L- and c-optimality. It is also planned to extend this work as follows: 1. An extension is to first transform $x$ to $F(x)$, where $F(\cdot)$ is a distribution function, and then to transform a set of design points to the proportions naturally defined by the differences in the $F(\cdot)$ values of successive design points. This has the advantage of accommodating unbounded design intervals, as occurs in non-linear models, and is a natural choice in binary regression models. 2. A major problem in optimum experimental design theory is concerned with discrimination between several plausible models. We "believe" that using this approach we can obtain T-optimum designs under some differentiability conditions. 3. We also consider examples with more than one design variable. In this case we transform the design problem to one of optimizing with respect to several distributions. INI 1
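A minimal sketch of the classical multiplicative weight-updating rule referred to above, in which each design weight is multiplied by its (normalised) directional derivative, shown here for D-optimality with a quadratic regression model on a fixed grid; the model and grid are illustrative choices, not taken from the talk.

```python
# Multiplicative updating for D-optimal design weights on a fixed grid:
# w_i <- w_i * d_i(w) / p, with d_i = f(x_i)' M(w)^{-1} f(x_i). Since
# sum_i w_i d_i = p, the weights keep summing to 1.
import numpy as np

x = np.linspace(-1, 1, 21)                          # candidate design points
F = np.column_stack([np.ones_like(x), x, x ** 2])   # f(x) = (1, x, x^2)
p = F.shape[1]                                      # number of parameters

w = np.full(len(x), 1 / len(x))                     # start from uniform weights
for _ in range(1000):
    M = F.T @ (w[:, None] * F)                      # information matrix M(w)
    d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)  # d_i = f_i' M^-1 f_i
    w = w * d / p                                   # multiplicative update

support = w > 1e-2
print(np.round(x[support], 3), np.round(w[support], 3))
# weights concentrate on {-1, 0, 1}, each close to 1/3, the D-optimal design
# for the quadratic model on [-1, 1]
```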
16:30 to 17:00  On construction of constrained optimum designs. Session: Design construction and optimality.
A simple computational algorithm is proposed for maximization of a concave function over the set of all convex combinations of a finite number of nonnegative definite matrices, subject to additional box constraints on the weights of those combinations. Such problems commonly arise when optimum experimental designs are sought over a design region consisting of finitely many support points, subject to the additional constraints that the corresponding design weights are to remain within certain limits. The underlying idea is to apply a simplicial decomposition algorithm in which the restricted master problem reduces to an uncomplicated weight optimization problem. Global convergence to the optimal solution is established, and the use of the algorithm is illustrated by examples involving D-optimal design of measurement effort for parameter estimation of a multiresponse chemical kinetics process, as well as sensor selection in a large-scale monitoring network for parameter estimation of a process described by a two-dimensional diffusion equation. Parallelization of the procedure and extensions to general continuous designs are also discussed. INI 1

17:00 to 17:30  Efficiency, optimality, and differential treatment interest. Session: Design construction and optimality.
Standard optimality arguments for designed experiments rest on the assumption that all treatments are of equal interest. One exception is found in the "test treatment versus control" literature, where the control is allocated special status. Optimality work there has focused on all pairwise comparisons with the control, taking no explicit account of how well test treatments are compared with one another. In many applications it would be preferable to choose a design depending on the relative importance placed on contrasts involving the control compared with those involving test treatments only. This is an example of where a weighted optimality approach can better reflect experimenter goals. When evaluating designs for comparing $v$ treatments, weights $w_1, \ldots, w_v$ ($\sum_i w_i = 1$) can be assigned to account for differential treatment interest. These weights enter the evaluation through optimality measures, leading to, for example, weighted versions of the popular A, E, and MV measures of design efficacy. Families of weighted-optimal designs have been identified for both blocked and unblocked experiments. The theory for weighted optimality leads quite naturally to the notion of weight-balanced designs. Weighted balance and partial balance incorporate the concepts of efficiency balance and its generalizations that have been built on the foundation laid by Jones (1959, JRSS-B 21, 172-179). These balance ideas are closely tied to the weighted E criterion. INI 1

18:45 to 19:30  Dinner at Wolfson Court (Residents only)
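The box-constrained weight problem in the "On construction of constrained optimum designs" abstract above can be stated generically: maximize the log-determinant of a convex combination of information matrices subject to bounds on the weights. The sketch below uses a general-purpose SLSQP solver as a stand-in for the talk's simplicial decomposition algorithm; the candidate points, model and bounds are illustrative assumptions.

```python
# Generic box-constrained D-optimal weight problem (illustrative, not the
# simplicial decomposition algorithm of the talk).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-1, 1, 11)                        # candidate support points
F = np.column_stack([np.ones_like(x), x, x ** 2]) # quadratic regression model
A = [np.outer(f, f) for f in F]                   # elementary information matrices

def neg_log_det(w):
    return -np.linalg.slogdet(sum(wi * Ai for wi, Ai in zip(w, A)))[1]

n = len(x)
res = minimize(neg_log_det, np.full(n, 1 / n), method="SLSQP",
               bounds=[(0.0, 0.2)] * n,           # box constraints on the weights
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print(np.round(res.x, 3))                         # constrained D-optimal weights
```

With the 0.2 cap binding, the optimal mass that would otherwise sit at the extreme points is forced to spread over neighbouring candidates, which is exactly the situation the constrained algorithm is designed to handle.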